Image generated using Stable Diffusion 2.1 via https://huggingface.co/spaces/stabilityai/stable-diffusion
Introduction
I've finally found time to have a play around with the OpenAI API, and was
delighted to find that it has a
NodeJS library, so I can interact with it via TypeScript or JavaScript rather than learning
Python. Not that I have any issue with learning Python, but it's not something
that's particularly useful for my day-to-day work, and thus not something I
want to invest effort in right now.
As I've said many times in the past, anyone who knows me knows that I love a
Salesforce CLI plug-in, and that is where my mind immediately went once I'd
cloned their
GitHub repo
and been through the (very) quick overview.
The Plug-In
I wasn't really interested in surfacing a chat bot via the Salesforce CLI -
for a start, it wouldn't really add anything over and above the public OpenAI
chat page.
The first area I investigated was asking it to review Apex classes and call
out any inefficiencies or divergence from best practice. While this was
entertaining, and found some real issues, it was also wildly inaccurate (and
probably something my Evil Co-Worker will move forward with to mess with
everyone).
I then built out a couple of commands to generate titles and abstracts for
conference sessions based on Salesforce development concepts, but as the
Dreamforce call for papers is complete for this year, that didn't seem worth
the effort.
I landed on a set of explainer commands that would ask the AI model to explain
something to help a more junior developer (see the prompt sketch after this
list):
- apex - explain a topic in the context of the Apex programming language
- cli - explain a Salesforce CLI command, along with an example
- salesforce - explain a Salesforce topic using appropriate language for a specific audience - admins, devs, execs etc.
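Under the hood, each command boils down to assembling a natural-language
prompt from the flag values. The following is my illustrative guess at the
shape of those prompts - the function names and wording are assumptions, not
the plug-in's actual code:
// Hypothetical prompt builders - the real plug-in's wording may differ
const apexPrompt = (topic: string): string =>
    `Explain ${topic} in the context of the Apex programming language, ` +
    `in a way that will help a junior developer understand it`;

const salesforcePrompt = (topic: string, audience = 'programmers'): string =>
    `Explain the Salesforce concept of ${topic} using language appropriate ` +
    `for an audience of ${audience}`;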
Accessing the OpenAI API is very straightforward:
Install the OpenAI Package
> npm install openai
Create the API instance
import { Configuration, OpenAIApi } from 'openai';
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
Note that the API Key comes from the OPENAI_API_KEY environment variable - if
you want to try out the plug-in yourself you'll need an OpenAI account and
your own key.
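If you haven't set an environment variable before, in bash or zsh it's a
one-liner (substituting in your own key, of course):
> export OPENAI_API_KEY=<your key>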
Create the Completion
This is where the prompt is passed to the model so that it can generate a
response:
const completion = await openai.createCompletion({
model: 'text-davinci-003',
max_tokens: maxTokens,
prompt,
temperature
});
const result = completion.data.choices[0].text as string;
Note that the completions API is legacy and received its final update in July
2023. I chose it simply because it's cheap. When you are writing a new
plug-in connecting to a third-party system you get a lot of stuff wrong, and I
was keen not to rack up too large a bill while I was learning! The Chat
Completions API is slightly different, in that it receives a list of messages
and supports calling functions and feeding their results back in. It's not
dramatically different though, so I felt anything I learned applied to both.
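For comparison, here's roughly what the equivalent call looks like against the
Chat Completions API with the same library - a sketch of the documented shape,
not code from the plug-in:
// Chat Completions takes a list of messages rather than a single prompt string
const chatCompletion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    max_tokens: maxTokens,
    messages: [{ role: 'user', content: prompt }],
    temperature
});
const chatResult = chatCompletion.data.choices[0].message?.content as string;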
I moved this into a shared function, hence most of the call is parameterised -
there's a sketch of that function after the list below. The createCompletion
function can take many parameters, but here's an explanation of those that
I've used:
- model - the large language model to use
- max_tokens - the maximum number of tokens to generate in the response. A token is typically thought to be around 4 characters, so will sometimes represent a whole word and sometimes part of a word. Input and output token counts are how you get charged for API use, so you want to keep them down where you can. The function defaults to 16, which in practice is fairly useless for a response to a human, so I've typically upped it to 256
- prompt - the question/task that I've set the model
- temperature - a value between 0 and 2.0 that controls how predictable the response is - lower values give more focused, deterministic output, while higher values give more random output. I find that 0.6 works pretty well. If you change this, do it in small increments, as I asked it to review an Apex class with a temperature of 1.5 and got pretty much gibberish.
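Putting that together, the shared function ends up looking something like
this - a reconstruction using the defaults mentioned above, rather than a
verbatim copy from the repository:
// Sketch of a shared completion helper - the name and defaults are my assumptions
const DEFAULT_MAX_TOKENS = 256;
const DEFAULT_TEMPERATURE = 0.6;

async function getCompletion(
    prompt: string,
    maxTokens: number = DEFAULT_MAX_TOKENS,
    temperature: number = DEFAULT_TEMPERATURE
): Promise<string> {
    const completion = await openai.createCompletion({
        model: 'text-davinci-003',
        max_tokens: maxTokens,
        prompt,
        temperature
    });
    return completion.data.choices[0].text as string;
}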
And that's about it. The rest of the code is available at the
GitHub repository - it's mostly standard CLI plug-in stuff of handling flags and
displaying the response.
The Plug-In In Action
Once I've installed the plug-in from NPM, using the following command:
> sf plugins install bbai
I can ask for some explanations. First, explain the Salesforce concept of
groups. As I don't provide details of the audience, it defaults to
programmers:
> sf bbai explain salesforce -t groups
Here are the results
Groups in Salesforce are a way to organize and manage users in the
system. They allow you to categorize users based on their roles and
responsibilities, allowing for more streamlined access to information. For
example, you can create a group for all the programmers within your
organization, allowing them to access the same resources, documents, and
tasks. Additionally, groups can be used to assign tasks, send messages,
and create reports. By creating groups, you can quickly and easily manage
permissions, tasks, and data access for your programming team.
Now the same command for an exec audience - notice the difference?
> sf bbai explain salesforce -t groups -s executive
Here are the results
Groups in Salesforce allow organizations to manage and control access to
records and data. Groups are used to assign users to roles within the
organization and to grant them access to the appropriate records. By using
groups, organizations can easily assign the same set of permissions to
multiple users, allowing them to quickly and easily manage access rights.
This makes it easier to manage user permissions, and also helps to ensure
that users only have access to the data and records they need to do their
job. Groups also streamline the process of granting and revoking access
rights, making it easier for organizations to stay secure and
compliant.
And now I want to start working with some of these concepts, but I don't
understand how to get my code into the instance via the CLI:
> sf bbai explain cli -c "source push"
Here are the results
Salesforce CLI command source push is used to deploy source from the
local environment (e.g. a developer’s machine) to a Salesforce org. This
command is useful in a DevOps process, as it enables developers to quickly
and easily deploy their code changes to a Salesforce org for testing and
deployment.
Example Execution:
sfdx force:source:push --targetusername devorg --sourcepath
/path/to/source/directory/
The command above will deploy all source code in the specified directory
to the org with the username devorg.
Conclusions
Something that really stood out in this was the simplicity of the integration
- it's literally a few lines of code. Which is as it should be - with
generative AI, English is the programming language so most of the effort
should go into the prompt rather than getting set up to send the prompt.
Looking ahead a few years, I can see this being a game changer for integrating
systems. Rather than starting with a book of API specifications that you need
to adhere to, there will be a single entry point where you send conversational
requests to a model trained on the specific data. "Return the id and all name
and address fields of all the contacts we have that work for financial
services customers in JSON format", without needing to develop that specific
interface.
The Cost
This was top of mind for me during this experiment. I've seen situations in
the past where request-based charging really racked up during testing. OpenAI
allows you to set a hard and a soft limit, so that you can't get into that
situation, but it also charges by the token for both input and output - tricky
to budget for when you aren't that familiar with how many tokens might be in a
2-3 line prompt. After building my plug-in, with the usual number of mistakes,
and testing it for several hours, I'd used the princely sum of $0.16.
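To put that in context, text-davinci-003 was priced at $0.02 per 1,000 tokens,
so $0.16 works out at roughly 8,000 tokens of combined input and output across
all of that testing.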
While that might sound like there's nothing to worry about, the latest models
are more expensive per token and can handle significantly more tokens for both
input and output, so you may find that things stack up quickly. I've seen news
stories lauding the ability to send an entire book to a generative AI as
context, but no mention of how much you'll be charged to do that!
More Information