Sunday, 6 August 2023

Salesforce CLI Open AI Plug-in - Function Calling

Image generated by Stable Diffusion online, based on a prompt by Bob Buzzard

WARNING: The new command covered in this blog extracts information from your Salesforce database and sends it to OpenAI in the US to provide additional grounding to a prompt - only use this with test/fake data, as there is no attempt at masking or restricting field access.


Back in June 2023, only a couple of months ago on the calendar but a lifetime in generative AI product releases, OpenAI announced the availability of function calling. Now that my plug-in is integrated with gpt-3.5+, this is something I can use, but what value does it add? 

The short version - this allows the model to progress past its training data and request more information to satisfy a prompt. 

The longer version. As we all know, the data used to train gpt-3.5 cut off at September 2021, so often the response to a prompt will warn you that things may have changed. With function calling, when you prompt the model you also tell it about any functions you have available that it can use to retrieve additional information. If the functions don't help, it will return the response as usual; if they would help, it will return function calls for you to make and pass back to it. Note that the model doesn't call the functions - it tells you which functions to call and the parameters to pass, and expects you to make the decision as to whether you should call them, which is as it should be.
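As a sketch of the decision point in that round trip (the helper name and shapes are mine, based on the v3 openai NodeJS library's response format, not the plug-in's actual code):

```typescript
// Shapes from the v3 openai NodeJS library's Chat Completion response (simplified).
interface FunctionCall {
  name: string;
  arguments: string; // JSON-encoded parameters chosen by the model
}

interface Choice {
  finish_reason: string;
  message: {
    role: string;
    content: string | null;
    function_call?: FunctionCall;
  };
}

// Decide whether the model answered directly or is asking us to execute a function.
// Returns the parsed call details, or null if the message content is the final answer.
function extractFunctionCall(
  choice: Choice
): { name: string; args: Record<string, unknown> } | null {
  if (choice.finish_reason !== 'function_call' || !choice.message.function_call) {
    return null;
  }
  const fc = choice.message.function_call;
  return { name: fc.name, args: JSON.parse(fc.arguments) as Record<string, unknown> };
}
```

The caller then gets to decide whether to actually execute what the model asked for - which, as noted above, is exactly where that decision belongs.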

The Plug-in Command

In the latest version (1.2.2) of my plug-in, there's a new command that gives the model access to a function to pull data from Salesforce if needed. The function simply takes a query and returns the result as a JSON formatted string:

import { AuthInfo, Connection } from '@salesforce/core';

const queryData = async (query: string): Promise<string> => {
  const authInfo = await AuthInfo.create({ username: flags.username });
  const connection = await Connection.create({ authInfo });
  const result = await connection.query<{ Name: string; Id: string }>(query);

  return JSON.stringify(result);
};

When the command is executed, the request to the Chat Completion API includes the prompt supplied by the user, and details of the function:

const functions: ChatCompletionFunctions[] = [
  {
    name: 'queryData',
    description: 'A function to extract records from Salesforce via a SOQL query',
    parameters: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'The SOQL query to execute',
        },
      },
      required: ['query'],
    },
  },
];

Note that my function isn't targeting any specific objects, nor does it have any logic to figure out what might be needed based on the user's prompt - it simply executes a query and returns the results, which might be an error or an empty data set.

I execute this command as follows:

> sf bbai org data -t "Create an email introducing GenePoint, a new account we are tracking in Salesforce. Include the industry and number of employee details from our Salesforce database" -u

Note that I have to tell the prompt that the GenePoint account details can be found in our Salesforce database - if I don't do that it won't see any value in the function. I've also provided the description of a couple of fields that I want it to extract, and finally I've supplied the username that I'll connect to Salesforce with if the model asks me to run a query.

When the model responds, it will indicate if it wants me to execute the function by specifying the finish_reason as 'function_call', and adding the details in the function_call property of the message:

{
  index: 0,
  message: {
    role: 'assistant',
    content: null,
    function_call: {
      name: 'queryData',
      arguments: `{\n  "query": "SELECT Industry, NumberOfEmployees FROM Account WHERE Name = 'GenePoint' LIMIT 1"\n}`
    }
  },
  finish_reason: 'function_call'
}

In this case it wants me to call queryData with a query parameter of a SOQL query string that extracts details of the GenePoint account. I execute this, then add the results of the query to the prompt message and retry my request. This time I get a response with the email containing the details I wanted:
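The retry described above amounts to appending two messages before calling the API again - the assistant's function_call message and a 'function' role message carrying the query results. A sketch, assuming the v3 openai library's message shape (the helper name is mine):

```typescript
// Message shape for the v3 openai NodeJS library (simplified).
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'function';
  content: string | null;
  name?: string; // required when role is 'function'
  function_call?: { name: string; arguments: string };
}

// After executing the requested query, the retry sends the original messages,
// the assistant's function_call message, and a 'function' role message
// carrying the JSON-formatted query results.
function buildFollowUpMessages(
  history: ChatMessage[],
  assistantMessage: ChatMessage,
  functionName: string,
  functionResult: string
): ChatMessage[] {
  return [
    ...history,
    assistantMessage,
    { role: 'function', name: functionName, content: functionResult },
  ];
}
```

The second request is otherwise identical to the first, so the model now has the Salesforce data available to ground its response.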

Subject: Introducing GenePoint - A Promising Addition to Our Portfolio

Dear Team,

I hope this email finds you well. I am excited to introduce you to our newest account, GenePoint, which we have recently started tracking in our Salesforce database. GenePoint is a biotechnology company that shows immense potential in its field. Allow me to provide you with some important details about this account.

Industry: Biotechnology

Number of Employees: 265


I think this is very cool because I haven't had to inspect the prompt in any way to decide to extract information from Salesforce. The model has been given a very basic function and knows (with some nudging, for sure) when it is appropriate to call it and, more importantly, the query that needs to be run to extract the details the user requested. 

Right now I've only told it about a single function, so it's either going to call that or nothing, but I can easily imagine a collection of functions that provide access to many internal systems. This allows the final request to be grounded with a huge amount of relevant data, leading to a highly accurate and targeted response.

Once again a reminder that this could result in sensitive or personally identifiable information being sent to the OpenAI API in the US to be processed, so while it's fun to try out, you really don't want to go anywhere near your production data with this.

More Information

Sunday, 30 July 2023

Salesforce CLI Open AI Plug-in - Generating Records

Image generated by Stable Diffusion 2.1, based on a prompt from Bob Buzzard


After the go-live of Service and Sales GPT, I felt that I had to revisit my Salesforce CLI Open AI Plug-in and connect it up to the GPT 4 Large Language Model. I didn't succeed in this quest, as while I am a paying customer of OpenAI, I haven't satisfied the requirement of making a successful payment of at least $1. The API function I'm hitting, Create Chat Completion, supports gpt-3.5-turbo and the gpt-4 variants, so once I've racked up enough cost using the earlier models I can switch over by changing one parameter. My current spending looks like it will take me a few months to get there, but such is life with competitively priced APIs.

The Use Case

The first incarnation of the plug-in asks the model to describe Apex, CLI or Salesforce concepts, but I wanted something that was more of a tool than a content generator, so I decided on creating test records. The new command takes parameters listing the field names, the desired output format, and the number of records required, and folds these into the messages passed to the API function. Like the Completion API, the interface is very simple:

const response = await openai.createChatCompletion({
     model: 'gpt-3.5-turbo',
     messages,
     temperature: 1,
     max_tokens: maxTokens,
     top_p: 1,
     frequency_penalty: 0,
     presence_penalty: 0,
});

result = response.data.choices[0].message?.content as string;

There's a few more parameters than the Completion API:

  • model - the Large Language Model that I send the request to. Right now I've hardcoded this to the latest I can access
  • messages - the collection of messages to send. The messages build on each other, and each message has a content (the instruction/request) and a role (who the message comes from). This allows me to separate the instructions to the model (when the role is system, I'm giving it constraints about how to behave) from the request (when the role is user, this is the task/request I'm asking it to carry out).
  • max_tokens is the maximum number of tokens (approximately 4 characters of text) that my request combined with the response can be. I've set this to 3,500, which is approaching the limit of the gpt-3.5 model. If you have a lot of fields you'll have to generate a smaller number of records to avoid breaching this. I was able to create 50 records with 4-5 fields inside this limit, but your mileage may vary.
  • temperature and top_p guide the model as to whether I want precise or creative responses.
  • frequency_penalty and presence_penalty indicate whether I want the model to continually focus on tokens if they are repeated, or focus on new information.
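To make the role split concrete, the messages for the test data command might look like this sketch (the wording is illustrative, not the plug-in's actual prompt; the OpenAI convention puts behavioural constraints in a system message):

```typescript
// Behavioural constraints go in a system message; the task itself in a user message.
const messages = [
  {
    role: 'system',
    content: 'You are a test data generator. Return only the records, with no additional commentary.',
  },
  {
    role: 'user',
    content: 'Generate 4 records in json format with these fields: FirstName (Text), LastName (Text), Rating__c (1-10)',
  },
];
```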

As this is an asynchronous API, I await the response, then pick the first element in the choices array. 

Here's a few executions to show it in action - linebreaks have been added to the commands to aid legibility - remove these if you copy/paste the commands.

> sf bbai data testdata -f 'Index (count starting at 1), Name (Text, product name), 
            Amount (Number), CloseDate (Date yyyy-mm-dd), 
            StageName (One of these values : Negotiating, Closed Lost, Closed Won)' 
            -r csv

Here are the records you requested
1,Product A,1000,2022-01-15,Closed Lost
2,Product B,2500,2022-02-28,Closed Won
3,Product C,500,2022-03-10,Closed Lost
4,Product D,800,2022-04-05,Closed Won
5,Product E,1500,2022-05-20,Negotiating

> sf bbai data testdata -f 'FirstName (Text), LastName (Text), Company (Text), 
                            Email (Email), Rating__c (1-10)' 
                            -n 4 -r json

Here are the records you requested
[
  {
    "FirstName": "John",
    "LastName": "Doe",
    "Company": "ABC Inc.",
    "Email": "",
    "Rating__c": 8
  },
  {
    "FirstName": "Jane",
    "LastName": "Smith",
    "Company": "XYZ Corp.",
    "Email": "",
    "Rating__c": 5
  },
  {
    "FirstName": "Michael",
    "LastName": "Johnson",
    "Company": "123 Co.",
    "Email": "",
    "Rating__c": 9
  },
  {
    "FirstName": "Sarah",
    "LastName": "Williams",
    "Company": "Acme Ltd.",
    "Email": "",
    "Rating__c": 7
  }
]

There's a few interesting points to note here:

  • Formatting field data is conversational - e.g. when I use Date yyyy-mm-dd the model knows that I want the date in ISO8601 format. For picklist values, I just tell it 'One of these values' and it does the rest.
  • In the messages I asked it to generate realistic data, and while it's very good at this for First Name, Last Name, Email, and Company, it's less good when told a Name field should be a product name, just giving me Product A, Product B etc.
  • It sometimes takes it a couple of requests to generate the output in a format suitable for dropping into a file - I'm guessing this is because I instruct the model and make the request in a single API call.
  • I've generated probably close to 500 records while testing this, and that has cost me the princely sum of $0.04. If you want to play around with the GPT models, it really is dirt cheap.

The final point I'll make, as I did in the last post, is how simple the code is. All the effort went into the messages: asking the model to generate the data in the correct format, not to pad the response with commentary about how it was satisfying the request, and to generate realistic data. Truly the key programming language for Generative AI is the supported language that you speak - English in my case!

As before, you can install the plug-in via :
> sf plugins install bbai
or if you have already installed it, upgrade via :
> sf plugins update

More Information

Saturday, 22 July 2023

Salesforce GPT - It's Alive!

Image generated by Stable Diffusion 2.1 in response to a prompt by Bob Buzzard


This week (19th July) the first of the Salesforce AI Cloud/Einstein GPT applications became generally available. You can read the full announcement on the Salesforce news site, but it's great to see something tangible in the hands of customers after the wave of marketing over the last few months. It's a relatively limited amount of functionality to start with, but I prefer that to waiting another 9 months for everything to be fully built out. GA is better than finished in this case!

What we know so far

We know that it's only available to Unlimited Edition, which already includes the existing Einstein AI features. This seems to be becoming the standard approach for Salesforce - Meetings, for example, was originally Performance and Unlimited Edition only, but is now available for all editions with Sales Cloud. It's a good way of keeping the numbers down without having to evaluate every application, and it's likely to include those customers that are running a lot of their business on Salesforce and thus will get the most value. 

We know that it's (initially) a limited set of features that look to be mostly relying on external generative AI systems rather than LLMs trained on your data. The features called out in the announcement include:

  • Service replies - personalised responses grounded in relevant, real time data sources. To be fair, this could be a model trained on your data, but the term grounded implies that it's an external request to something like Open AI GPT with additional context pulled from Salesforce.
  • Work Summaries - wrap ups of service cases and customer engagements. The kind of thing that Claude from Anthropic is very good at. These can then be turned into Knowledge Articles, assuming there was anything that could be re-used or repurposed for future customer calls.
  • Sales Emails - personalised and data-informed emails created for you, again sounding very much like a grounded response from something like OpenAI.
This looks like a smart move by Salesforce, as they can make generative AI available to customers without having to build out the infrastructure to host their own models - something that might present a challenge, given the demand for GPUs across the industry.

We know it will include the Einstein GPT Trust Layer. This is probably the biggest benefit - you could create your own integration with any of these external services, but you'd have to build all the protections in yourself, and the UI that allow admins to configure them.

We don't know what pricing to expect when it becomes available outside of Unlimited Edition, but given that it's included with Einstein there, it may well be included in that SKU for Enterprise Edition, which is $50/user/month for each of Sales and Service Cloud Einstein.

We know it includes "a limited number of credits", which I'm pretty sure was defined as 8,000 in one of the webinars I watched. This sounds like a lot, but we don't know what a credit is, so it might not be. If it's requests, that is quite a few. If it's tokens, not so much - testing my Salesforce CLI integration with OpenAI used around 6,000 tokens for not very many requests. Still, if you built your own integration with any of these tools you'd have to pay for usage, so there's no reason to expect unlimited usage to be included when going via Salesforce, especially as I'm sure different customers will have wildly different usage. Those 6,000 tokens also cost me around 12 cents, so hopefully purchasing extra credits won't break the bank!

We also know, based on the Service Cloud GPT webinar on 19th July, that we'll be able to add our own rules around PII/sensitive data detection in prompts. It seemed highly likely that would be the case, but good to have it confirmed.

Finally, we know this is just the beginning of the GPT GAs. Dreamforce is less than 2 months away and there will be some big AInnouncements for sure.

More Information

Sunday, 16 July 2023

Salesforce CLI OpenAI Plug-in

Image generated using Stable Diffusion 2.1


I've finally found time to have a play around with the OpenAI API, and was delighted to find that it has a NodeJS library, so I can interact with it via Typescript or JavaScript rather than learning Python. Not that I have any issue with learning Python, but it's not something that is that useful for my day to day work and thus not something I want to invest effort in right now.

As I've said many times in the past, anyone who knows me knows that I love a Salesforce CLI Plug-in, and that is where my mind immediately went once I'd cloned their Github repo and been through the (very) quick overview. 

The Plug-In

I wasn't really interested in surfacing a chat bot via the Salesforce CLI - for a start, it wouldn't really add anything over and above the public OpenAI chat page.

The first area I investigated was asking it to review Apex classes and call out any inefficiencies or divergence from best practice. While this was entertaining, and found some real issues, it was also wildly inaccurate (and probably something my Evil Co-Worker will move forward with to mess with everyone). 

I then built out a couple of commands to generate titles and abstracts for conference sessions based on Salesforce development concepts, but as the Dreamforce call for papers is complete for this year, that didn't seem worth the effort.

I landed on a set of explainer commands that would ask the AI model to explain something to help a more junior developer:

  • apex - explain a topic in the context of the Apex programming language
  • cli - explain a Salesforce CLI command, along with an example
  • salesforce - explain a Salesforce topic using appropriate language for a specific audience - admins, devs, execs etc.
Accessing the OpenAI API is very straightforward:

Install the OpenAI Package

> npm install openai

Create the API instance
    import { Configuration, OpenAIApi } from 'openai';

    const configuration = new Configuration({
        apiKey: process.env.OPENAI_API_KEY,
    });
    const openai = new OpenAIApi(configuration);
Note that the API Key comes from the OPENAI_API_KEY environment variable - if you want to try out the plug-in yourself you'll need an OpenAI account and your own key. 

Create the Completion

This is where the prompt is passed to the model so that it can generate a response:
    const completion = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt,
        max_tokens: maxTokens,
    });

    result = completion.data.choices[0].text as string;
Note that the completions API is legacy and received its final update in July 2023. I chose this simply because it's cheap. When you are writing a new plug-in connecting to a third party system you get a lot of stuff wrong and I was keen not to rack up too large a bill while I was learning! The Chat Completions API is slightly different, in that it receives a list of messages and can call functions and receive their results. It's not dramatically different though, so I felt anything I learned applied to both.

I moved this into a shared function, hence most of the call is parameterised. The createCompletion function can take many parameters, but here's the explanation of those that I've used:
  • model - the large language model to use
  • max_tokens - the maximum number of tokens to generate in the response. A token is typically thought to be around 4 characters, so will sometimes represent whole words and sometimes parts of words. Input and output token counts are how you get charged for API use, so you want to keep them down where you can. The parameter defaults to 16, which in practice is fairly useless for a response to a human, so I've typically upped it to 256.
  • prompt - the question/task that I've set the model
  • temperature - a value between 0 and 2.0 that indicates how accurate a response I want. I find that 0.6 works pretty well. If you change this, do it in small increments, as I asked it to review an Apex class with a temperature of 1.5 and got pretty much gibberish.
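Putting those parameters together, the request body can be sketched as a pure helper (the function name and defaults chosen here are mine, not the plug-in's):

```typescript
// Assemble the request body from the command's flags - separated from the API
// call itself so the parameters can be inspected (and tested) in isolation.
function buildCompletionRequest(prompt: string, maxTokens = 256, temperature = 0.6) {
  return {
    model: 'text-davinci-003', // legacy completions model - cheap to experiment with
    prompt,                    // the question/task for the model
    max_tokens: maxTokens,     // the default of 16 is too small for a human-readable answer
    temperature,               // 0.6 balances accuracy and creativity in my testing
  };
}
```

The returned object is what gets passed to openai.createCompletion.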
And that's about it. The rest of the code is available at the Github repository - it's mostly standard CLI Plug-In stuff of handling flags and displaying the response.

The Plug-In In Action

Once I've installed the plug-in from NPM, using the following command :

> sf plugins install bbai

I can ask for some explanations. First, explain the Salesforce concept of groups. As I don't provide details of the audience, it defaults to programmers:

> sf bbai explain salesforce -t groups

Here are the results

Groups in Salesforce are a way to organize and manage users in the system. They allow you to categorize users based on their roles and responsibilities, allowing for more streamlined access to information. For example, you can create a group for all the programmers within your organization, allowing them to access the same resources, documents, and tasks. Additionally, groups can be used to assign tasks, send messages, and create reports. By creating groups, you can quickly and easily manage permissions, tasks, and data access for your programming team.

Now the same command for an exec audience - notice the difference?

> sf bbai explain salesforce -t groups -s executive

Here are the results

Groups in Salesforce allow organizations to manage and control access to records and data. Groups are used to assign users to roles within the organization and to grant them access to the appropriate records. By using groups, organizations can easily assign the same set of permissions to multiple users, allowing them to quickly and easily manage access rights. This makes it easier to manage user permissions, and also helps to ensure that users only have access to the data and records they need to do their job. Groups also streamline the process of granting and revoking access rights, making it easier for organizations to stay secure and compliant.

And now I want to start working with some of these concepts, but I don't understand how to get my code into the instance via the CLI:

sf bbai explain cli -c "source push"             

Here are the results

Salesforce CLI command source push is used to deploy source from the local environment (e.g. a developer’s machine) to a Salesforce org. This command is useful in a DevOps process, as it enables developers to quickly and easily deploy their code changes to a Salesforce org for testing and deployment.

Example Execution: 

sfdx force:source:push --targetusername devorg --sourcepath /path/to/source/directory/

The command above will deploy all source code in the specified directory to the org with the username devorg.


Something that really stood out in this was the simplicity of the integration - it's literally a few lines of code. Which is as it should be - with generative AI, English is the programming language so most of the effort should go into the prompt rather than getting set up to send the prompt. Looking ahead a few years, I can see this being a game changer for integrating systems. Rather than starting with a book of API specifications that you need to adhere to, there will be a single entry point where you send conversational requests to a model trained on the specific data. "Return the id and all name and address fields of all the contacts we have that work for financial services customers in JSON format", without needing to develop that specific interface.

The Cost

This was top of mind for me during this experiment. I've seen situations in the past where request based charging really racked up during testing. OpenAI allows you to set a hard and soft limit, so that you can't get into that situation, but it also charges by the token for both input and output, which is a concern when you aren't that familiar with how many tokens might be in a 2-3 line prompt. After building my plug-in, with the usual number of mistakes, and testing it for several hours, I'd used the princely sum of $0.16. 

While that might sound like there's nothing to worry about, the latest models are more expensive per token and can handle significantly more tokens both for input and output, so you may find that things stack up quickly. I've seen news stories lauding the ability to send an entire book to a generative AI as context, but no mention of how much you'll be charged to do that!
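As a rough back-of-envelope for that kind of maths (using the 4-characters-per-token approximation from earlier; prices vary by model, so the rate is a parameter rather than anything official):

```typescript
// Back-of-envelope cost estimate: roughly 4 characters per token, with pricing
// quoted per 1,000 tokens. Prices vary by model, so the rate is a parameter.
function estimateCostUSD(text: string, pricePer1kTokens: number): number {
  const tokens = Math.ceil(text.length / 4);
  return (tokens / 1000) * pricePer1kTokens;
}
```

A 300-page book at around 600,000 characters is 150,000 tokens before the model has generated a single token of response - worth running numbers like these before pasting one in.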

More Information

Sunday, 9 July 2023

Salesforce World Tour London : AI Day

AI Day London

Image created using Stable Diffusion 2.1: a steampunk machine with a brain

It's been just over a week since Salesforce World Tour : AI Day, and I'll wager less than two weeks since the events team found out it was going to be rebranded AI Day! Kudos to them though, it was seamless and if you weren't following the tech news you'd never have guessed!

Those of us attending the UK event experienced great timing this year. The fact that it happened a few weeks after the launch of AI Cloud in New York means that cutting edge features have just reached demo state well outside of the usual release cycle. For example, we got the first public demo of Prompt Studio, which is the kind of thing that would typically land at Dreamforce in San Francisco, rather than a World Tour event on the other side of the world. I'm also fairly sure that we wouldn't have been treated to a deep dive in machine learning from Patrick Stokes if he wasn't leading the AI technical charge. 

Spare a thought for the poor execs who are suddenly having to travel a lot more than usual at this time of year, but if you want to position yourself as a leader in the AI space, you need some representation from Head Office. Bad news for them, but good news for us.

Prompt Studio looks good from what we saw in a short demo in a keynote. As expected, Salesforce are taking a cautious approach with guardrails around not just the AI, but who can create the prompts that drive the AI. There's some chatter on the socials that every admin is now a prompt engineer, but that feels like a simplistic view to me. Marketing and Legal are a couple of departments that will be very interested in contributing to and signing off prompts, rather than viewing it as regular configuration. Any admin who does become a prompt engineer could be looking at a rather lucrative career though, as that role is currently paying up to $335k/year, although it isn't expected to be around that long.

We also saw several other demos of GPT-as-an-Assistant across Sales, Service, Slack and Platform. This last one was particularly interesting for me, covering as it did automatic generation of Apex code from descriptive comments. This represents a significant change for us developers from the usual enhancements on the development side - typically we are keen to learn the detail of the new features so that we can start customising Salesforce with them. With GPT it's closer to handing off our work to a junior colleague - explaining what we want, waiting for it to appear and then reviewing it to see how well it matches our requirements. And, if history is anything to go by, realising that there was a whole layer of complexity that we forgot to mention!

One constant attendee at the London events is Kavindra Patel, VP of Trailblazer Engagement at Salesforce. I always try to catch up with him, and this year was no different. In a shocking turn of events, he let me in on the secret that there's quite a focus on AI at Salesforce right now!

And, of course, we had the Golden Hoodie award. This year it went to my fellow MVP and co-organiser of the London Salesforce Developer group, Todd Halfpenny, seen here preparing for a duel with Zahra Bahrololoumi CBE, CEO of Salesforce UK&I.

You can catch up with the World Tour keynote, and various other sessions, on Salesforce+. If you weren't able to attend in person I'd highly recommend it, as it is a good overview of where Salesforce will be in a few months once it's all built out. If you are interested in AI in general or AI Cloud in particular, but not sure where to start, join me on 19th July when I'll be doling out advice about preparing for the rise of the machines. 

Additional Information

Tuesday, 4 July 2023

Predictions for Digital Life in 2035

Predictions for Digital Life in 2035

Pew Research Center recently published the results from their 16th "Future of the Internet" research that aggregates experts' opinions on important digital issues. 

As you can imagine, much of the focus was on Generative and other Artificial Intelligence applications. A fair number of the respondents were enthusianxious about the changes, with 42% being equally excited and concerned about what they expect to see, 37% being more concerned than excited, and only 18% more excited than concerned. Clearly the pessimists are currently in charge, with a whopping 79% being as-or-more-concerned than excited. 

The concerns will again come as no surprise to anyone who has been following the AI news and opinion over the last year or two:

  • That the motivation will be for profit and power, leading to data collection aimed at controlling or coercing behaviour with ethics as an afterthought
  • Loss of privacy and jobs, leading to a rise in poverty and reduction in dignity
  • Human knowledge drowning in an ocean of meaningless or flat out wrong information generated by or using AI
  • Health impact as tech encourages us to become even more isolated from humans, or feeds our worst paranoia
  • Government and regulation being unable to keep up with the pace of change, with an end game of autonomous weapons and cyber warfare being waged by machines without human oversight. Essentially the first steps towards Terminators.
It's not all bad news though, with benefits expected to include:
  • Enhancements in healthcare, education, fitness, nutrition, entertainment, transportation and energy. Our digital assistants will free us up from much of the drudgery, leaving us more time to enjoy these improved offerings, which will of course be entirely integrated and friction-free.
  • Increased amplification when people speak up for their human rights, and easy collaboration with others who wish to mobilise to demand the same. Access to data and better communication tools will help people live better and safer lives around the globe.
  • Improved digital literacy, with the desire that this will see the return of trusted news and information sources. Failing that, at least an assurance that information is factual and verified.
  • A regulatory environment that promotes pro-social activities and minimises anti-social ones. 
One thing that jumps out at me from the list of negatives and positives is how much more detailed the Orwellian nightmares are - many of those that are concerned have really thought those concerns all the way through to the bitter end!

It's a relatively lengthy report at 230-odd pages, but an interesting read, and at times entertaining (especially the more extreme views, as always). An awful lot of those canvassed expect most technological innovations to be a double-edged sword - Howard Rheingold, author of "The Virtual Community", advises us to ask of any new technology 'What would 4chan do with it?'. Jonathan Grudin, Affiliate Professor of Information Science at the University of Washington, paints a picture of "a Sorcerer's Apprentice's army of feverishly acting brooms with no sorcerer around to stop them" as the sheer scale of digital activities, and the incredible speeds at which they are carried out, far outpaces our ability to verify and correct, so we stand by helplessly and watch.

Louis Rosenberg, CEO and Chief Scientist at Unanimous AI, predicts that we'll be in full Star Trek mode by 2035, as keyboards, mice, touchscreen input and flat screen display output are swept aside by conversational interfaces. Like Captain Kirk, we'll just ask the computer to carry out a task in plain language (although for some reason Kirk still needed humans to change the velocity and direction of the Enterprise, and patch through a call, so maybe our jobs are safe after all).

If you have a couple of hours to spare, it's well worth a read. My main takeaway is that even those who are positive are sounding a note of caution equally loudly, and I think that's an appropriate view. It dovetails nicely with most of the work that I do in my professional life - I'm not so much looking for things that will work correctly, as a lot of that is fairly obvious. Instead I'm looking for problems - what won't work, are there unintended side effects, is it future-proof, will it scale? It's easy to get caught up in the hype and hoopla of an emerging technology like Generative AI, and let the FOMO trick you into doing something rash. With any new technology, always take a step back, a deep breath, and evaluate with cold eyes.

Will 2035 be digital heaven or digital hell? I think, like always, it will be six of one, half a dozen of the other - a constant battle that leads to an uneasy balance.

Additional Information

Sunday, 14 May 2023

Light DOM Scoped Slots in Summer 23


It's that time again - another Salesforce release is a few weeks away, the preview release notes are out, and preview scratch orgs can be spun up to try out some of the new features. There are a few changes around Lightning Components in the upcoming release, and the first one that caught my eye was the concept of scoped slots, made possible by the general availability of the light DOM.

The light DOM is what we used to call the DOM before web components came along - the standard Document Object Model representing the structure of a web page that is visible and accessible to any JavaScript that cares to look. The light aspect comes from the fact that web components by default use shadow DOM - a hidden DOM structure attached to the regular DOM that only the component can see. 

Scoped slots are an inversion of the usual slot behaviour seen with Lightning Web Components. The standard case allows a parent component to pass markup into a child component to render in specific locations. Scoped slots turn this on its head and allow a child component to pass information up to a parent component, which renders it in a specific slot. All of which sounds very cool, but at first glance it seemed like a solution looking for a problem. The example in the release notes didn't do much to change this view, as it showed a child iterating a list and the parent rendering the contents of each list item. Given that the parent can iterate the list just as easily as the child, I couldn't see the value that was added.
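The inversion can be sketched without any framework at all - the child owns the data and the iteration, while the parent supplies the per-item rendering as a callback. This is a minimal illustration of the idea, not LWC API; all the names here are made up:

```javascript
// Framework-free sketch of the scoped-slot inversion (illustrative names only).
// "Child" component: owns the iteration, delegates presentation to the parent.
function childIterator(items, renderItem) {
    return items.map(renderItem);
}

// "Parent" component: decides how each value is presented.
const rendered = childIterator(
    [{ Id: '1', Name: 'Acme' }, { Id: '2', Name: 'Globex' }],
    (value) => `ID: ${value.Id} (${value.Name})`
);
// rendered is ['ID: 1 (Acme)', 'ID: 2 (Globex)']
```

The callback plays the role of the parent's slot content, and the child decides when and how often to invoke it.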

One thing I've found with Lightning Web Components is that a lot of the new features have equivalents in another JavaScript framework - Vue.js seems to be the prime candidate at the moment, and sure enough the concept of scoped slots exists there. Unfortunately the various blogs that I read on this topic didn't help my understanding enormously - when you aren't familiar with the framework, the examples aren't always helpful. What I did take away is that the value of scoped slots lies in separating the generation of the data from the rendering, so that the generation is available for re-use. The release notes example didn't really demonstrate this, as the list iteration is the same regardless of whether you do it in the parent or the child, so I needed to figure out another use case.

The Sample

Simple list iteration is easy, but what about the case where I have a JavaScript object created from an Apex Map and want to iterate that? I can't do this in markup; instead I have to create a list of properties from the object. This feels like a situation for a child component that can handle any conversion and iteration required and simply send the properties back to the parent.
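The conversion itself is plain JavaScript, sketched here outside LWC with made-up data - the shape mimics how an Apex Map<Id, Account> arrives in JavaScript, as an object keyed by record Id:

```javascript
// Illustrative data only - the shape an Apex Map serializes to: an object
// whose properties are the map entries, keyed here by record Id.
const accountsMap = {
    '001xx0000001': { Id: '001xx0000001', Name: 'Acme' },
    '001xx0000002': { Id: '001xx0000002', Name: 'Globex' }
};

// template for:each needs an array, so flatten the object's values into a list
const values = Object.values(accountsMap);
// values is [{ Id: '001xx0000001', ... }, { Id: '001xx0000002', ... }]
```

This flattening is exactly the job the child component takes on, so no parent ever has to repeat it.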

My component that handles this is called mapIterator, and when it receives an object via an @api property, it creates a list from it:

set objMap(value) {
    this._objMap = value;
    // Flatten the object's property values into a list the template can iterate
    this.values = value ? Object.values(value) : [];
}

and the HTML markup simply iterates this and makes each entry available to the parent via the lwc:slot-bind directive, remembering to render in light DOM mode:

<template lwc:render-mode="light">
    <template for:each={values} for:item="value">
        <slot key={value.Id} lwc:slot-bind={value}></slot>
    </template>
</template>

I have several parent components that make use of this - one to generate a simple list of accounts, one to generate a list of account cards, and one to generate a list of opportunity cards. The mapIterator provides the conversion and iteration of the object properties to each of these with no changes required. You can find all of these in the GitHub repository, but here's the markup from the simple list:

<c-map-iterator obj-map={accountsMap}>
    <template lwc:slot-data="value">
        <div class="slds-p-left_small slds-p-bottom_xx-small">
            <strong>ID:</strong> {value.Id}
        </div>
        <div class="slds-p-left_large slds-p-bottom_small">
            {value.Name}
        </div>
    </template>
</c-map-iterator>

Access to the element from the iterator is provided by the lwc:slot-data directive, and as you can see the parent handles all the presentation side of things.
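That re-use is the whole point, and it can be sketched in plain JavaScript - one child-side iteration, several parent-side renderings. The function below stands in for the mapIterator component, and the data and callbacks are made up for illustration:

```javascript
// Reuse sketch: the same child-side conversion/iteration serves different
// parents, which differ only in the rendering callback they supply.
const mapIterator = (objMap, render) => Object.values(objMap).map(render);

const opportunities = { o1: { Id: '006x1', Name: 'Big Deal' } }; // illustrative data

// "Simple list" parent
const simpleList = mapIterator(opportunities, (v) => `ID: ${v.Id}`);
// "Card" parent - same iterator, different presentation
const cards = mapIterator(opportunities, (v) => `[card] ${v.Name}`);
```

Swapping the presentation never touches the iteration logic, which is the separation the scoped-slot pattern is buying.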


There is definite value here in separating the conversion/iteration from the rendering of the content. That said, I think you might run into issues if you are rendering the iterated elements using markup that must be a direct descendant of a containing element. In that case you won't be able to separate everything, as the child element's markup gets in the way. There you'd likely need specialisations of the iterator that also render the container, which won't feel quite as clean.

More Information