Tuesday, 7 October 2025

Agentforce Vibes - First Look, Data Model

  Image created by ChatGPT 5 based on a prompt from Bob Buzzard 


Introduction

We all knew this was coming, right? Salesforce has long considered itself the cool kid in enterprise technology, so they were always going to jump on the vibe coding bandwagon. After reading the Salesforce Developers blog post on Agentforce Vibes I was keen to give it a go. 

I took the approach that I wanted the Agent to be truly autonomous, so my plan was to agree with everything it wanted to do, and then once everything was deployed to my org, I'd try it out and review everything at that point. This is how I'd work with a human junior assisting me, although I'd obviously be available to talk through their ideas if they wanted, which agents typically don't need.

Setup

Setup was as easy as it gets. I'm using VS Code and simply by switching to the Agent Dev view we were off and vibing.


I spun up a scratch org, activated the MCP server for the Salesforce CLI, and then tried to figure out what to do with it.

What to Vibe Out

I didn't just want to vibe some straightforward additional Apex into an existing code repository, as I'm sure that's one of the smoke tests of this new functionality. If it can't do that, it's going to be a tough time on the socials for Salesforce. The part of application development I've always wanted to speed up, especially for my side projects, is creating the data model and permission sets. Doing this directly through XML metadata is quite error-prone, and doing it through Setup takes a while. As a coder, this is typically a task I just want out of the way so I can start cutting some Apex. 

I decided to give it the kind of task that I was intimately familiar with, as I'd given it to many graduates back in my BrightGen days. The concept is an onboarding application, with a bunch of templated journeys broken down into steps that can be instantiated and assigned to a new joiner, with a specified start date and manager etc. There's a bunch of requirements around calculating completion dates and current state that require roll up summaries and formula fields, so it's a good introduction to data modelling for those new to Salesforce.

I created a prompt of around 130 words that covered the key concepts in natural language. I avoided giving any clues, so rather than talking about roll up summaries and master details, I used phrases like "this is calculated from the max values in the steps for the journey". Probably quite close to the real instructions that I gave humans.

I gave the agent the prompt and asked it to generate a plan, which it did. 

The plan was frankly excellent.

It had picked up all the nuance of the requirements - identified where Master-Detail relationships were required, understood that templates were separate to the actual journeys and needed to be modelled as their own objects, and came up with recommendations around security and deployment. It also suggested a bunch of extra fields, permission sets, and a flow to create journeys from templates. Most of this mirrored later tasks for the grads, so I told it to skip those. I then signed off on the plan and sat back to watch the agent at work.

Creating the Metadata

One thing I found a little tedious was the agent wanting me to okay every file before creating it, even though I'd ticked the box for auto-approve. I didn't check any further into this, so it's possible there's another setting I needed to look at. I typically don't review people's work piecemeal as they create individual components, so I just okayed them all straight away. What I saw as they were being generated looked plausible, and after about 10 minutes it had completed all the work. So I asked it to deploy its work to my scratch org.

Deploying the Files (or where it all went awry)

The initial attempt at deployment threw up an error saying you can't specify both the apexTests and apexTestLevel parameters. Slightly unexpected, but the agent was easily able to move on. 

The next attempt threw up a few errors:

  • The agent had used <picklist> instead of <valueSet> which wasn't compatible with the metadata API version I'd specified. Slightly unexpected again, as I'd been asked which API version I wanted to generate the metadata for, but again something easily fixed.
  • It had set a Private sharing model for an object on the detail side of a Master-Detail relationship. It turned out this applied to both the sharing model and the external sharing model, which caused problems in later attempts as it changed one but not the other.
  • The roll up summary metadata fields weren't correct, so the agent suggested changing them. It turned out that the suggested new fields were no more valid (<summaryTable>).
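For reference, picklist fields in recent Metadata API versions are defined with a <valueSet> element rather than the long-retired <picklist>. A minimal sketch of what the agent should have produced - the field and value names here are my own illustrations, not its actual output:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Status__c</fullName>
    <label>Status</label>
    <type>Picklist</type>
    <valueSet>
        <restricted>true</restricted>
        <valueSetDefinition>
            <sorted>false</sorted>
            <value>
                <fullName>Not Started</fullName>
                <default>true</default>
                <label>Not Started</label>
            </value>
            <value>
                <fullName>Complete</fullName>
                <default>false</default>
                <label>Complete</label>
            </value>
        </valueSetDefinition>
    </valueSet>
</CustomField>
```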
As the agent was in charge, I okayed all of its suggested changes and it tried again. We then entered a doom cycle of attempting to deploy, getting errors, and highly variable suggestions for fixes. 

I think that the agent wanted to apply fixes for every error in one go, which isn't always the best approach with deployments, as one error can cascade into a lot of failures. My approach is to fix errors one at a time and retry the deployment, so that I have a handle on what I've changed and what difference it made. The agent would want to change the metadata to fix every error at once, even when the underlying error was a custom object failing to deploy. 

My favourite was where a parent object in a relationship couldn't deploy because the roll up summary metadata was wrong, which threw an error on the child object. The agent felt that this was an issue with the child object and that the cause was the relationship being Master-Detail. It changed the relationship to a Lookup field, but sadly it left the roll up summary metadata in place, thus finding more errors at the next deployment.
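For the record, valid roll up summary metadata doesn't involve <summaryTable> at all - it's a field of type Summary on the master object, referencing the Master-Detail field on the detail. A sketch of a max roll up, with illustrative object and field names rather than anything from my actual prompt:

```xml
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Latest_Completion_Date__c</fullName>
    <label>Latest Completion Date</label>
    <summarizedField>Step__c.Completion_Date__c</summarizedField>
    <summaryForeignKey>Step__c.Journey__c</summaryForeignKey>
    <summaryOperation>max</summaryOperation>
    <type>Summary</type>
</CustomField>
```

Note that this only deploys if Journey__c on Step__c really is a Master-Detail field - which is exactly why swapping the relationship to a Lookup while leaving the Summary field in place guaranteed more errors.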

After a couple of attempts the agent had used up all my requests and switched me to the core model. I wondered if this might be better, given that it's a Salesforce hosted (and presumably trained) model, but sadly that wasn't the case. If I was paying for requests to be burned by something that wasn't even following documented metadata standards, I'd likely be a little miffed.

Eventually the agent proudly announced that it had completed the deployment, even though I could see the request had failed.



Creating and Deploying Permission Sets (or the Folie à Deux)


Checking the org also confirmed nothing had been deployed, but this is vibe coding where the facts don't matter and the agents are in charge, so I feigned ignorance and asked it to now create some permission sets for me - an admin and a manager, obviously giving quite a lot of detail.

Again, the plan here was excellent - it understood my prompt, picked out the nuance and generated plausible files. The agent had clearly been emboldened by how easily I was tricked into believing the deployment was successful, and jumped straight to it. This time I decided I couldn't continue to enable its flights of fancy and called it out. It folded like a cheap suit. 



This was comfortingly familiar - often ChatGPT and others give me completely incorrect code and, when called out on it, fess up immediately. It didn't offer to fix it again though, just told me it was sorry, the system was broken, and what needed fixing. Vibe Confessions.

At this point I took over and fixed the errors - <writable> instead of <editable> for a custom field in the permission set was the most egregious, in case you were wondering. 
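For anyone who hasn't hit this before: field access in a permission set is expressed through <fieldPermissions> elements with <editable> and <readable> flags - <writable> simply isn't in the schema. A minimal sketch with made-up API names:

```xml
<PermissionSet xmlns="http://soap.sforce.com/2006/04/metadata">
    <label>Onboarding Admin</label>
    <fieldPermissions>
        <editable>true</editable>
        <field>Journey__c.Start_Date__c</field>
        <readable>true</readable>
    </fieldPermissions>
</PermissionSet>
```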

But Seriously Folks!


It's easy to mock Agentforce Vibes (I mean, just look at what it was doing - this stuff writes itself!), but the issues I've identified will be easily fixed. It reminds me of the first release of Agentforce for Developers - the test class code generated by that was fairly awful, but it wasn't long before it was quite reliable. If we didn't have Dreamforce '25 starting in a week, I'm pretty sure we wouldn't be seeing this yet. I guess that data model metadata might also not be its strong suit, but it's GA and it understood the ask, so I think it's fair enough to call out the performance.

So for this, admittedly slightly complex, data modelling task, Agentforce Vibes is top tier for planning, but decidedly middling for execution. You'll need to be experienced with Salesforce metadata to guide it through generating the correct metadata, or to fix it yourself. In terms of generating a list of tasks to carry out in the UI, it was pretty amazing; it just wasn't great at handling those tasks itself in metadata. 

While the above probably reads as somewhat negative (and yes, snarky!), that isn't really the case. Using Agentforce Vibes was way faster than trying to create this all myself, either via XML or through the UI. I probably had it all in my scratch org with appropriate permission sets in around 60-90 minutes. The caveat here is that if I didn't have my many years of experience working with metadata, I doubt I'd have got it deployed at all. 

Once the execution catches up with the planning, which I'm sure won't be that long, it will be a different story and a really helpful sidekick.

This is only my first look - I'll be back to this scenario to vibe code some Flow, Lightning component and asynchronous Apex, and I'll keep you updated on how I get on.

Related Information