Monday, 13 October 2025

Structured Output from Flow Agent Actions in Winter '26

Image created by ChatGPT 5 based on a prompt by Bob Buzzard


Introduction

When preparing for the Credera Winter '26 release webinar, the release notes for this feature gave me significant pause for thought. Not because it was an awesome change that I'd been waiting ages for, nor that it was something out of left field that I couldn't wait to try. Instead it was because I didn't understand how it worked. The release notes talked about custom agent actions returning specific fields, so did that mean it was the action itself that returned complex data types? There was only one way to find out.

Giving it a Go

Once I'd waited for my Agentforce developer edition to be upgraded to Winter '26 I was able to try out this new functionality. In order to understand how much effort I had to put in around my actions, I started off putting in zero effort. Masterful inactivity has always served me well!

The first thing I tried was a simple screen flow, with the first element an AI Agent Action, as this was the key to defining structured output. 


First crack out of the box and I have a winner! Without even having to create a custom action, Copilot for Salesforce is available as an AI Agent Action. Clicking into this showed that the new Structured Output functionality was available with this action.


After a few false starts (the AI Agent Action wouldn't accept collections of records etc.) I had a simple flow that would take in an account Id, retrieve the opportunities associated with the account, convert them to a simple JSON structure, and ask the AI Agent to calculate the total amount of the opportunities. For my AI Agent Action, I gave a relatively simple prompt grounded with the opportunity information.
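To give an idea of the shape involved, the grounding data might look something like this (the structure and field names here are illustrative rather than the exact JSON my flow produced):

```json
[
    { "name": "Big Deal", "amount": 100000 },
    { "name": "Renewal", "amount": 25000 }
]
```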

and for my structured output, I specified a single field - the total amount.


Executing this in debug mode gave me the answer to my first question - did that mean it was the action itself that returned complex data types? No, in this case I'm simply sending a request to an LLM and it will give a text response. The Salesforce platform handles the conversion to structured output.



And I can then use that structured output like I would any other complex object, in this case in a screen displaying the total.


Note that the "container" for the structured output is actually a Dynamic Apex Class that you can access through setup:
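The generated container is platform-managed, but conceptually it's equivalent to a hand-written class along these lines (the class and field names here are purely illustrative guesses - the actual names come from the structured output definition):

```apex
// Conceptual sketch only - not the actual generated source.
// One property per field defined in the action's structured output;
// in my example, just the total amount.
public class OpportunityTotalOutput {
    public Decimal totalAmount;
}
```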


Conclusion


To answer my earlier question:

Did that mean it was the action itself that returned complex data types?

Not quite - the LLM still returns plain text, and the platform converts it to structured output. But there was no need for me to create anything outside of the action for the results to be stored in - I just defined the field and used natural language to explain what should be stored in there. The platform created an Apex class to store the information and populated it from the LLM output.

This is pretty cool - it allows a low-code builder to convert the unstructured output from an LLM into structured output for use in downstream processing. Prior to this feature an Apex developer would likely have been needed to help, but now it can all be handled by a low coder.

Of course this is a terrible example. If I really wanted the total, why wouldn't I calculate it while iterating the records, rather than pulling together a bunch of text and then incurring the expense and time overhead of an LLM callout - Agents Augment Automation, they don't replace it. 

I chose this example as it was easy to put together to prove the concept. In the real world I'd only use LLMs to handle tasks that regular automation couldn't, like figuring out the customer sentiment from a bunch of activities associated with the Opportunities. More work to set that up though, and harder to explain.

More Information



Tuesday, 7 October 2025

Agentforce Vibes - First Look, Data Model

  Image created by ChatGPT 5 based on a prompt from Bob Buzzard 


Introduction

We all knew this was coming, right? Salesforce has long considered itself the cool kid in enterprise technology, so they were always going to jump on the vibe coding bandwagon. After reading the Salesforce Developers blog post on Agentforce Vibes I was keen to give it a go. 

I took the approach that I wanted the Agent to be truly autonomous, so my plan was to agree with everything it wanted to do, and then once everything was deployed to my org, I'd try it out and review everything at that point. This is how I'd work with a human junior assisting me, although I'd obviously be available to talk through their ideas if they wanted - something agents typically don't need.

Setup

Setup was as easy as it gets. I'm using VS Code and simply by switching to the Agent Dev view we were off and vibing.


I spun up a scratch org, activated the MCP server for the Salesforce CLI, and then tried to figure out what to do with it.

What to Vibe Out

I didn't just want to vibe some straightforward additional Apex into an existing code repository, as I'm sure that's one of the smoke tests of this new functionality. If it can't do that, it's going to be a tough time on the socials for Salesforce. The part of application development I've always wanted to speed up, especially for my side projects, is creating the data model and permission sets. Doing this directly through XML metadata is quite error-prone, and doing it through Setup takes a while. As a coder, this is typically a task I just want out of the way so I can start cutting some Apex.

I decided to give it the kind of task that I was intimately familiar with, as I'd given it to many graduates back in my BrightGen days. The concept is an onboarding application, with a bunch of templated journeys broken down into steps that can be instantiated and assigned to a new joiner, with a specified start date and manager etc. There's a bunch of requirements around calculating completion dates and current state that require roll up summaries and formula fields, so it's a good introduction to data modelling for those new to Salesforce.

I created a prompt of around 130 words that covered the key concepts in natural language. I avoided giving any clues, so rather than talking about roll up summaries and master details, I used phrases like "this is calculated from the max values in the steps for the journey". Probably quite close to the real instructions that I gave humans.

I gave the agent the prompt and asked it to generate a plan, which it did. 

The plan was frankly excellent.

It had picked up all the nuance of the requirements - identified where Master-Detail relationships were required, understood that templates were separate to the actual journeys and needed to be modelled as their own objects, and came up with recommendations around security and deployment. It also suggested a bunch of extra fields, permission sets, and a flow to create journeys from templates. Most of this was later tasks for the grads so I told it to skip those. I then signed off on the plan and sat back to watch the agent at work.

Creating the Metadata

One thing I found a little tedious was the agent wanted me to okay every file before creating it, even though I'd ticked the box for auto-approve. I didn't check any further into this, so it's possible there's another setting I needed to look at. I typically don't review people's work piecemeal as they create individual components, so I just okayed them all straight away. What I saw as they were being generated looked plausible, and after about 10 minutes it had completed all the work. So I asked it to deploy its work to my scratch org.

Deploying the Files (or where it all went awry)

The initial attempt at deployment threw up an error that you can't specify both the apexTests and apexTestLevel parameters. Slightly unexpected, but it was easily able to move on. 

The next attempt threw up a few errors:

  • The agent had used <picklist> instead of <valueSet> which wasn't compatible with the metadata API version I'd specified. Slightly unexpected again, as I'd been asked which API version I wanted to generate the metadata for, but again something easily fixed.
  • It had set a Private sharing model for an object on the detail side of a Master-Detail relationship. It turned out this applied to both the sharing model and the external sharing model, which caused problems in later attempts as the agent changed one but not the other.
  • The roll up summary metadata fields weren't correct, so the agent suggested changing them. It turned out that the suggested new fields were no more valid (<summaryTable>).
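For anyone hitting the same error, current Metadata API versions expect picklist values to be defined with <valueSet> rather than the legacy <picklist> element - something along these lines (the field and values are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Status__c</fullName>
    <label>Status</label>
    <type>Picklist</type>
    <valueSet>
        <valueSetDefinition>
            <sorted>false</sorted>
            <value>
                <fullName>Not Started</fullName>
                <default>true</default>
                <label>Not Started</label>
            </value>
            <value>
                <fullName>Complete</fullName>
                <default>false</default>
                <label>Complete</label>
            </value>
        </valueSetDefinition>
    </valueSet>
</CustomField>
```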
As the agent was in charge, I okayed all of its suggested changes and it tried again. We then entered a doom cycle of attempting to deploy, getting errors, and highly variable suggestions for fixes. 

I think that the agent wanted to apply fixes for every error in one go, which isn't always the best approach with deployments, as one error can cascade into a lot of failures. My approach is to fix errors one at a time and retry the deployment, so that I have a handle on what I've changed and what difference it made. The agent would want to change the metadata to fix every error at once, even when the underlying error was a custom object failing to deploy.

My favourite was where a parent object in a relationship couldn't deploy because the roll up summary metadata was wrong, which threw an error on the child object. The agent felt that this was an issue with the child object and that the cause was the relationship being Master-Detail. It changed the relationship to a Lookup field, but sadly left the roll up summary metadata in place, thus finding more errors at the next deployment.

After a couple of attempts the agent had used up all my requests and switched me to the core model. I wondered if this might be better, given that it's a Salesforce hosted (and presumably trained) model, but sadly that wasn't the case. If I was paying for requests to be burned by something that wasn't even following documented metadata standards, I'd likely be a little miffed.

Eventually the agent proudly announced that it had completed the deployment, even though I could see the request had failed.



Creating and Deploying Permission Sets (or the Folie à Deux)


Checking the org also confirmed nothing had been deployed, but this is vibe coding where the facts don't matter and the agents are in charge, so I feigned ignorance and asked it to now create some permission sets for me - an admin and a manager, obviously giving quite a lot of detail.

Again, the plan here was excellent - it understood my prompt, picked out the nuance and generated plausible files. The agent had clearly been emboldened by how easily I was tricked into believing the deployment was successful, and jumped straight to it. This time I decided I couldn't continue to enable its flights of fancy and called it out. It folded like a cheap suit. 



This was comfortingly familiar - often ChatGPT and others give me completely incorrect code and, when called out on it, fess up immediately. It didn't offer to fix it again though - it just told me it was sorry, the system was broken, and what needed fixing. Vibe Confessions.

At this point I took over and fixed the errors - <writable> instead of <editable> for a custom field in the permission set was the most egregious, in case you were wondering. 
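For reference, field permissions in a permission set are expressed with <readable> and <editable> - there is no <writable> element. A correct entry looks something like this (the object and field names are illustrative):

```xml
<fieldPermissions>
    <editable>true</editable>
    <field>Journey__c.Start_Date__c</field>
    <readable>true</readable>
</fieldPermissions>
```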

But Seriously Folks!


It's easy to mock Agentforce Vibes (I mean, just look at what it was doing - this stuff writes itself!), but the issues I've identified will be easily fixed. It reminds me of the first release of Agentforce for Developers - the test class code generated by that was fairly awful, but it wasn't long before it was quite reliable. If we didn't have Dreamforce '25 starting in a week, I'm pretty sure we wouldn't be seeing this yet. I guess that data model metadata might also not be its strong suit, but it's GA and it understood the ask, so I think it's fair enough to call out the performance.

So for this, admittedly slightly complex, data modelling task, Agentforce Vibes is top tier for planning, but decidedly middling for execution. You'll need to be experienced with Salesforce metadata to guide it through generating the correct metadata, or fixing it yourself. In terms of generating a list of tasks to carry out in the UI, it was pretty amazing, it just wasn't great at handling those tasks itself in metadata. 

While the above probably reads as somewhat negative (and yes, snarky!), that isn't really the case. Using Agentforce Vibes was way faster than trying to create this all myself, either via XML or through the UI. I probably had it all in my scratch org with appropriate permission sets in around 60-90 minutes. The caveat here is that if I didn't have my many years of experience working with metadata, I doubt I'd have got it deployed at all.

Once the execution catches up with the planning, which I'm sure won't be that long, it will be a different story and a really helpful sidekick.

This is only my first look - I'll be back to this scenario to vibe code some flows, Lightning components, and asynchronous Apex, and I'll keep you updated on how I get on.

Related Information



Friday, 8 August 2025

Software Testing on the Salesforce Platform


Update 10/08/2025: Second version now pushed and the price has increased to $7.99.

Introduction

Since the start of my developer journey with Salesforce, there's one phrase I've trotted out with remarkable regularity:

"When I have more time I'm going to write a book about testing". 

Typically this would be when I was reviewing unit tests and found the developers unable to tell me why they'd created a particular test, other than it caused a few lines of code to be executed and thus covered for deployment to production. Other times it was after talking to graduates to find out that UK universities still don't cover unit testing in Computer Science degrees, and they didn't really know how to get started. I'm now at the point where I have more time, so after talking the talk, it's time to start walking the walk. 

TL;DR - it's a work in progress, but available for purchase on Leanpub at Software Testing on the Salesforce Platform

How to Publish?

I didn't want to go the traditional route, but also didn't want the vanilla self-publishing experience. Instead I was looking for some kind of iterative mechanism where I didn't spend a year creating a single point in time snapshot and then never returned to it (outside of spending another year on another point in time second edition etc). I also didn't want something that cost $50+. While that's not a huge amount of money in places like Western Europe and the USA, there are other locations where it's prohibitive. 

I remembered reading that Steven Sinofsky wrote his Hardcore Software book on Substack, but when I dug into this it was a $100/year subscription over 2 years, so considerably more expensive rather than less. Also, while this was a reasonable way to break up the writing, there's no easy way to combine all the newsletters back into a book, hence he published it as a book too. 

Enter Leanpub

I was familiar with Leanpub, having bought a couple of books when I was learning Node.js, and when I looked into publishing on that platform it was pretty much a perfect fit for what I was trying to do:

  • Publish Early, Publish Often.

This is a core principle of Leanpub - rather than waiting until everything is complete, you can publish as soon as you feel you have enough content to make it worthwhile, then push out updates as and when you feel it is appropriate. This also allows you to pivot based on reader feedback. And it allows you to give up if there is zero interest, I guess :)

  •  Readers Pay Once

    Rather than paying out for the initial edition, then paying again for each future edition, once you buy a book on Leanpub you get all the updates included forever. 

  • Flexible Pricing

    I can set (and change) the price as I desire. I also have the option to set a minimum price (what I will sell it for) and a suggested price (what I'd like to sell it for). That said, there's nothing to stop someone paying more than these prices if they really want to spend $50 or more!

Version 1

The first version of Software Testing on the Salesforce Platform weighs in at 147 pages and has a suggested/minimum price of $5.99 - it is by no means the finished article, as I still have a lot of ground I want to cover. There is a potential downside to purchasing at this time, in that I'm under no obligation to complete it, so this might be all you get. I do intend to finish it, and I feel that it represents value for money in its current form, but your opinion may differ.

To reiterate on the pricing, if you buy this version then you get every subsequent version free forever. The price will increase as I add content, so early adopters are rewarded for believing in me with a chance to get in at rock bottom.

When you purchase on Leanpub an account is automatically created for you - this isn't to allow me to pester you, but to allow you to access the latest version when it is published. 



Tuesday, 5 August 2025

Agentforce Custom Lightning Types

Introduction

The Summer '25 release of Salesforce (or maybe prior, given Agentforce releases are on a monthly cycle) introduced Custom Lightning Types, which finally gives us a degree of fine-grained control over the user interface. At the time of writing (August 2025) this is only for actions that use Apex classes for requests and responses.

In essence, a Custom Lightning Type configures an Agentforce user interface override for an Apex class. The Apex class can either be used as the input to send information to an Agentforce action, or as the output containing the response to the user request. 

Example

I've been playing around with a Custom Lightning Type to display overview information from my Limit Tracker that I created for my session at London's Calling 2025. For this I wanted two custom types:

  • An input that captured the number of days of Limit Snapshots to include in the overview
  • An output that showed the Heap and CPU details from the snapshots
I also wanted these to be more interesting than a simple form field to capture an integer and a list of values, but that only comes into play when developing the Lightning Web Components to handle the user interface, so I'll come to those later.

Output

My custom Apex action returns the following class:

public class LimitConsumption { 
    @InvocableVariable
    public String name;
                
    @InvocableVariable
    public List<LimitSnapshot> snapshots;         
        
    public LimitConsumption(String name, List<LimitSnapshot> snapshots) {
        this.name = name;
        this.snapshots = snapshots;
    }
}

It can sometimes be tricky to pinpoint the exact class returned from an action, as they tend to be inner classes and nested inside response wrappers. An easy way to find out exactly which class Agentforce will use is to go through the New Action process and see what it picks up for the output:


If you aren't from a developer background, the class reference might look a bit odd, but it breaks down as:

  • c__ - this is the namespace the class is present in; c__ means the default (or lack of) namespace
  • LimitConsumptionOverview is the outer, or containing class
  • The $ separator indicates this is an inner class of LimitConsumptionOverview
  • LimitConsumption is the name of the actual inner class 
Once I know which class my type will be working with, I can create an entry in the lightningTypes source folder, in this case limitConsumptionResponse, to make it clear this is used to render the response from an Agent.
   
The schema.json file configures the type to work with my custom class:
{
  "title": "Limit Consumption",
  "description": "Overview of limit consumption by a particular piece of Apex code",
  "lightning:type": "@apexClassType/c__LimitConsumptionOverview$LimitConsumption"
}

The lightning:type entry is the key piece of wiring that couples this Custom Lightning Type with my Apex class.

Next I had to decide how I wanted to render this information. Something that couldn't easily be found on the regular detail pages was appealing, and as it is handled by a Lightning Web Component I have access to the full power of JavaScript and the Lightning Design System. I decided to return a tabbed interface containing a chart:



and a list of values:



Note that I've taken a leaf out of the Salesforce examples and my custom Apex action returns the same list of fake data regardless of how it's invoked.

You can find the Lightning Web Component at : limitConsumption - it uses the Chart.js library to create the bar chart.

Once my Lightning Web Component is in place, I need to create a folder named lightningDesktopGenAi in my Custom Lightning Type folder, containing a file named renderer.json:

  {
    "collection": {
      "renderer": {
        "componentOverrides": {
          "$": {
            "definition": "c/limitConsumption"
          }
        }
      }
    }
  }

The key aspect here is the componentOverrides which defines my Lightning Web Component (limitConsumption in the default namespace) as the override to render the type. 

Input

My custom Apex action relies on the following input class:

public with sharing class SliderIntegerWrapper {
    @InvocableVariable
    public Integer value;
}

Note that this is simply wrapping an Integer value - while lots of the examples I've seen are built around complex classes containing multiple values/nested classes, they don't have to be. In this case I just want a funkier way to enter the value, so I need an Apex class to hold it.

To capture the information I decided to go with a slider component - note that this did give Agentforce a little trouble with the rendering - once the value got to double figures it wrapped to the next line!


Similar to the output, I create a folder named sliderInteger under lightningTypes and define the schema.json to couple the type to my class:

{
  "title": "Slider Integer",
  "description": "Slider Integer",
  "lightning:type": "@apexClassType/c__SliderIntegerWrapper"
}

as this time my class is top level, the entry just contains c__ (for the default namespace) and the name of the class.

The Lightning Web Component that provides the input slider can be found at sliderInteger

This time, once my Lightning Web Component is in place, I need to define a lightningDesktopGenAi/editor.json file to define it as the override:

{
  "editor": {
    "componentOverrides": {
      "$": {
        "definition": "c/sliderInteger"
      }
    }
  }
}

The Agent Action

Now that I have my Custom Lightning Types defined and the overrides configured, I can create the Agentforce action that makes use of them, based on my LimitConsumptionOverview invocable Apex class. When defining the inputs I choose my SliderInteger type for the daysWrapper:


and for outputting the limit consumption information, I choose the LimitConsumptionResponse type:


Executing this action brings both my types into play, as can be seen from the following short video:




Lessons Learned

Something I'd strongly recommend when working with Custom Lightning Types is to build yourself a simple test page so that you can try them out without having to go via Agentforce every time you make a change. While it doesn't sound onerous, making requests and supplying inputs gets tedious very quickly! I created a test harness Lightning Web Component (limitConsumptionHarness) that sends fake data to the output component and handles the event from the input component, and it's saved me a lot of time.

The first scratch org I tried this in, I'd already used the example from the Salesforce docs, and for some reason only that one would be picked up. After spending a couple of hours trying various things, and creating yet another set of custom types, I spun up a new scratch org and it all worked fine. No idea what went wrong, but by the same token I was learning and tweaking so it could have been anything.

Make sure to use distinct names for your types, classes, and Lightning Web Components. On my first attempt I used the same name for the type and the component and quickly lost track of what I was configuring where.

More Information








Friday, 25 April 2025

Evaluate Dynamic Formulas in Apex in Summer '25


Image generated by ChatGPT o3 in response to a prompt by Bob Buzzard
Proving yet again AI isn't good at putting text in images!

Introduction

The ability to evaluate dynamic formulas in Apex, which went GA in the Spring '25 Salesforce release, gets a minor but very useful improvement in the Summer '25 release - template mode. In this mode you can use merge syntax to refer to record fields, making formulas that concatenate strings much easier to specify. Using the example from the release notes, rather than writing the formula to combine an account name and website as a clunky concatenation:

    name & " (" & website & ")"

we can write:

    {!name} ({!website})

and tell the formula builder to evaluate it as a template. This felt like a good addition to my formula tester page, that I created to demonstrate the Spring '25 functionality, and it also uncovered some unexpected behaviour.

The Sample

My formula tester page is updated to allow the user to specify whether the formula should be evaluated as a template or not via a simple checkbox under the text area to input the formula:


I've also tried to make the page helpful and added an onchange handler to the text area that toggles the Formula is template checkbox based on whether the entered text contains the pattern {! - note that the user can override this if they need that string literal for some other reason:



Note also that there's a timeout of 1 second in the onchange handler so it will only fire when the user has (hopefully) finished typing. 

There will be code


The revised Apex method to build and evaluate the formula is as follows:

@AuraEnabled
public static String CheckFormula(String formulaStr, String sobjectType, String returnTypeStr,
                                  Id recordId, Boolean formulaIsTemplate)
{
    FormulaEval.FormulaReturnType returnType = 
                   FormulaEval.FormulaReturnType.valueof(returnTypeStr);

    FormulaEval.FormulaInstance formulaInstance = Formula.builder()
                                    .withType(Type.forName(sobjectType))
                                    .withReturnType(returnType)
                                    .withFormula(formulaStr)
                                    .parseAsTemplate(formulaIsTemplate)
                                    .build();


    //Use the list of field names returned by the getReferenced method to generate dynamic soql
    Set<String> fieldNames = formulaInstance.getReferencedFields();
    Set<String> lcFieldNames=new Set<String>();
    for (String fieldName : fieldNames)
    {
        lcFieldNames.add(fieldName.toLowerCase());
    }
    if (lcFieldNames.isEmpty())
    {
        lcFieldNames.add('id');
    }

    String fieldNameList = String.join(lcFieldNames,',');
    String queryStr = 'select ' + fieldNameList + ' from ' + sobjectType + 
                      ' where id=:recordId LIMIT 1'; //select name, website from Account
    SObject s = Database.query(queryStr);
    Object formulaResult=formulaInstance.evaluate(s);
    system.debug(formulaResult);

    return formulaResult.toString();
}
The bit I expected to change was the additional formulaIsTemplate parameter and invoking parseAsTemplate on the builder. While testing however, I found that if I built a string literal without using any fields, my fieldNameList parameter was empty and I got an error running the SOQL query of select from Account where id='<blah>'.

My first attempt at a fix was to skip the query if the field list was empty, but the evaluate method errors when passed a null sObject parameter. No problem, I'll just add the Id field if the Set of field names is empty. 

Turns out there was a problem. I checked if the field named Id was in the Set, which it wasn't, then added it. Executing the query duly errored as I had a duplicate field in my query. It turns out that the getReferencedFields method is case sensitive - it considers Id to be a different field from id and returns them both in the Set.

To confirm this I ran the following Apex:

FormulaEval.FormulaInstance formulaInstance = Formula.builder()
		.withType(Schema.Account.class)
		.withReturnType(FormulaEval.FormulaReturnType.STRING)
		.withFormula('{!name} {!Name} {!NaMe} {!NAME}')
		.parseAsTemplate(true)
		.build();
        
String fieldNameList = String.join(formulaInstance.getReferencedFields(),',');
System.debug('Field name list = ' + fieldNameList);
and got the output:

    09:55:32:043 USER_DEBUG [9]|DEBUG|Field name list = NAME,NaMe,Name,name

So I had to add some extra code to iterate the field names, lower case them and add them to a new Set to remove any duplicates. Then I could check if it was empty and if it was add the id field.

You can find the updated code in my Summer 25 samples repository - note that this needs a Salesforce instance on the Summer '25 release which, at the time of writing (April 25th), means pre-release orgs. Sandboxes are available on May 9th and scratch orgs on May 11th, assuming Salesforce hit their dates.

More Information





Tuesday, 1 April 2025

Secret Agentforce Pen

The Artificial Intelligence revolution continues to upend the technology industry as Salesforce makes its first move into hardware devices with the Secret Agentforce Pen.

The Secret Agentforce Pen resembles a spy pen that everyone will know and love from their youth, but with a key difference - rather than simply recording a conversation, the embedded Agentforce connection acts on what it hears. 

No more running workshops to find out what your users are struggling with when using your Salesforce implementation, simply place the Secret Agentforce Pen in an unobtrusive place in the office and capture their views as they work. 

Any issues or ideas are seamlessly turned into Cases, and the best part is that as your users have no idea they are being recorded, you'll get the unvarnished truth!

Or wear your Secret Agentforce Pen with pride to take action on those casual conversations in the kitchen room. Delight your users when Agentforce implements their request by the time they've returned to their desk - they'll think you are some kind of wizard!

According to a source at Salesforce, speaking on condition of anonymity as they were not authorised to share information about the new product, "We've tried various prototypes over the last 18 months, but struggled to find that blend of cutting-edge Generative AI and fun products from the classified pages of kids' comics in the 70s. The Secret Agentforce Pen was the culmination of this search, providing that elusive mix of cheap retro-style and modern functionality."

Pricing for the Secret Agentforce Pen has not yet been announced, but the same source confirmed that in keeping with the rest of the Agentforce product set, it will be confusing enough that most customers will be scared to use it. 

Saturday, 22 March 2025

Keep Your Agentforce Dev Org Alive and Kicking


Image generated by OpenAI GPT4o in response to a prompt by Bob Buzzard

This post was updated on 12th April to point the Github repo links to V1.0, as a pull request to allow multiple dev orgs to be "renewed" at once was merged to the main branch. Thanks Warren Walters for the PR.

Introduction

One of the major announcements at TrailblazerDX '25 was the availability of Salesforce Developer Editions with Agentforce and Data Cloud. This is something that just about everyone has been asking for since the first generative AI features went GA in April 2024. Everything else that wasn't associated with a paid contract expired - Trailhead specials after 5 days and Partner SDOs couldn't be taken past 90 days, and even getting that far required raising a case to extend it past the default 30 days. The icing on the cake was the metadata support wasn't fantastic, which meant manually recreating features after spinning up a new org, which gets a bit samey after the fifth time.

Developer Editions don't live forever though - if you don't use them you lose them: after 180 days for the non-Agentforce variant (now known as legacy), and after an evanescent 45 days if you want the generative AI features. To be fair, the signup page states that Salesforce "may" terminate orgs that are unused for 45 days rather than "will", but if you've put a lot of effort in you don't want to take any risks.

45 days sounds like a long time, and it's easy to assume you'll manage to remember to log in every 6 weeks or so, but my experience is that it's easy to get distracted and miss a slot. Machines are much better at remembering to do things on a schedule, so this is one task that I prefer to hand off - in this case to a Github Action.

Access Dev Org Action

Github Actions allows automation of software workflows through YAML (YAML Ain't Markup Language) files. Typically I'd use actions for continuous integration tasks on a project - automated build and test every night, for example - but I can also use them for something simpler: in this case, running a Salesforce CLI command against my dev org. If you set up your environment and secret name as I have, you'll be able to use my YAML file exactly as is.

Setup Dev Org

The first thing to do is get your new dev org and set up CLI access. Sign up for a new Agentforce dev org at:

https://www.salesforce.com/form/developer-signup/?d=pb

Then connect it to the Salesforce CLI using the command:

> sf org login web -a AFDevOrg

Log in with your username and password and approve the OAuth access.

Execute the following CLI command:

> sf org display -o AFDevOrg --json --verbose

and copy the Sfdx Auth Url value from the output.
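Rather than eyeballing the JSON for the value, you can pull it out programmatically. A small self-contained sketch - the mock payload below is an assumption standing in for the real CLI output, which returns the value under `result.sfdxAuthUrl`:

```shell
# Mock of the `sf org display -o AFDevOrg --json --verbose` output - a
# stand-in so this snippet runs on its own; the real command returns the
# same shape with your org's values.
mock_output='{"status":0,"result":{"username":"me@example.com","sfdxAuthUrl":"force://PlatformCLI::token@mydomain.my.salesforce.com"}}'

# Extract result.sfdxAuthUrl with POSIX sed
# (jq -r '.result.sfdxAuthUrl' works too, if you have jq installed)
auth_url=$(echo "$mock_output" | sed -n 's/.*"sfdxAuthUrl":"\([^"]*\)".*/\1/p')
echo "$auth_url"
```

Piping the real command's output through the same sed expression gives you the value to paste into your secret.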

Now you have this value, you can move on to the Github steps.

Create Your Repository

In order to use Github actions on a free account, your repository will need to be public. This doesn't mean that your Dev Org becomes public property, as you'll be storing the Sfdx Auth Url in an environment secret that only you as the repository owner can see:



Create Your Environment and Secret

Once you've created your repository, you need an environment to store your secret - click the Settings tab on the top right and choose the Environments option on the left hand menu:

Click the New environment button on the resulting page, then name your environment and click the Configure environment button:


On the resulting page, scroll down to the Environment secrets section and click the Add environment secret button:

Name your secret, paste the Sfdx Auth Url value in the Value textbox and click the Add secret button.


Create and Execute Your Action

The final setup step is to create the YAML file for your action. This needs to live in the .github/workflows subfolder of the repository and can be called anything you like - I've gone for:

.github/workflows/renew.yaml
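For reference, here's a minimal sketch of what such a workflow might contain. This is not the file from the repo - the environment name (AFDevOrg) and secret name (SFDX_AUTH_URL) are assumptions, so substitute whatever you chose in the earlier steps:

```yaml
# Hypothetical sketch of .github/workflows/renew.yaml - see the Github
# repo linked from this post for the canonical version.
name: Renew Agentforce Dev Org Lease

on:
  workflow_dispatch:        # allows on-demand runs from the Actions tab

jobs:
  renew:
    runs-on: ubuntu-latest
    environment: AFDevOrg   # environment holding the secret (assumed name)
    steps:
      - name: Install Salesforce CLI
        run: npm install -g @salesforce/cli

      - name: Authenticate to the dev org
        run: |
          echo "${{ secrets.SFDX_AUTH_URL }}" > auth.txt
          sf org login sfdx-url --sfdx-url-file auth.txt --alias AFDevOrg

      - name: Touch the org so it counts as used
        run: sf org display -o AFDevOrg
```

The key idea is that the Sfdx Auth Url secret lets the runner authenticate non-interactively, after which any CLI command against the org registers as activity.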

Once the file is present, clicking on the Actions tab will show the name of the action in the All workflows list on the left hand side - Renew Agentforce Dev Org Lease in this case.


Click on the name to see the run history and other options. There are no runs yet, but I've defined a workflow_dispatch event trigger so that I can run it on demand - in my experience this is well worth adding.


Start a run by clicking the Run workflow dropdown and the resulting Run workflow button:

I find that either the page doesn't refresh or I can't wait for that to happen, so I click the Actions tab again to see the In progress run:


Clicking on the run name gives me a little more detail:


And clicking the card shows me workflow steps - I've waited until they all completed successfully, so my run is green!


Switching over to my dev org and checking my user record, I can see a login history entry matching the action execution - although, obviously, it comes from a Github IP address in the US:


And that's it. As well as the workflow_dispatch event trigger, I've also specified a cron trigger so that it executes at 20:30 every Monday - rather more often than every 45 days, but that gives me some wiggle room if my job starts failing and I don't get around to fixing it quickly.
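As a sketch, the schedule trigger sits alongside workflow_dispatch in the workflow's `on:` block. One thing worth knowing: Github Actions evaluates cron expressions in UTC, so 20:30 here means 20:30 UTC rather than local time:

```yaml
on:
  workflow_dispatch:         # manual runs from the Actions tab
  schedule:
    - cron: '30 20 * * 1'    # 20:30 UTC every Monday
```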

More Information