Saturday, 11 November 2023

OpenAI GPTs - Meet Bob Buzzard 2.0



Introduction


During OpenAI DevDay, the concept of custom GPTs was launched - Chat GPT with a bunch of preset instructions to target a specific problem domain, additional capabilities such as browsing the web, and extra knowledge in terms of information that may not be available on the web. 

In order to create and use GPTs, you need to be a ChatGPT Plus subscriber at $20/month, although in the UK there's VAT to be added so it works out around £20/month. This also gives priority access to new features, the latest models and tools, faster response times and access even at peak times. I signed up just to try out GPTs though, as they looked like a world of fun.

The Replicant


My first custom GPT is my replicant - Bob Buzzard 2.0. A GPT that has been pointed at most of my public and some of my private information. Instructed to respond as I would, you can expect irreverent or sarcastic responses as the mood takes (AI) me. Obviously very focused on Salesforce, and keen on Apex code. 

Right now you'll need to be a ChatGPT Plus user to access custom GPTs, but if you are you can find Bob Buzzard 2.0 at : https://chat.openai.com/g/g-DOVc9phwC-bob-buzzard-2-0  Here's a snippet of a response from my digital twin regarding the impact of log messages on CPU - something I've investigated in detail in the past :


Creating GPTs


This is incredibly simple - you just navigate to the create page and tell it in natural language how you want it to behave, define the skills, point it at additional web sites or upload additional information. It's easy and requires no technical knowledge, which does make me wonder why they announced it at developer day given there's no development needed, but lets not tilt at that windmill.

A Couple of Warnings


First, remember that any private information that you upload to a GPT won't necessarily remain private. If you don't instruct your custom GPT to keep instructions and material private, it will happy share them on request. 

Second, I've given the replicant a mischievous side - from time to time it will just gainsay your original decisions when you ask for help with specific problems, maybe suggesting you have picked the wrong Salesforce technology, or telling you to bin it all off and use another vendor. Think of this as your reminder that a human should always be involved in any decision making based on advice from AI.

I'm Going to be Rich?


Something else that was announced at Developer Day was revenue sharing - if people use Bob Buzzard 2.0 I'll get a slice of the pie. So does this mean I'm going to be rich? Like always, almost certainly not. As you just click a button and answer questions to create a GPT, there will be millions of them before too long. They are so easy to create that something a service like Salesforce development advice, with the vast amount of content already in the public domain, will be extremely competitive - an extremely crowded marketplace of similar products means everybody earns nothing.

That said, I think this is something that genuine creatives will be able to earn with. Rather than having their work used to train models that are can then be used to produce highly derivative works for close to free, they can create their own GPT and at least stand a chance of getting paid. Whether the earnings will be worth it we don't yet know, although history suggests the platform providers will keep everything they can.  

Saturday, 4 November 2023

The Einstein Trust Layer must become the Einstein Trust Platform

Image from https://www.salesforce.com/news/stories/video/explaining-the-einstein-gpt-trust-layer/


Introduction


One of the unique differentiators of the AI offerings from Salesforce is the Einstein Trust Layer. Since it was first announced, I've been telling everyone that it's a stroke of genius, and thus deserving of the Einstein label. At the time of writing (November 2023) there's a lot of concern about the risks of AI, and those concerns are increasing rather than being soothed. Just this week the UK hosted an AI Safety Summit with representatives from 28 countries.

The Einstein Trust Layer


Salesforce have baked security and governance into a number of places in the journey from prompt template to checked response, including :
  • Prompt Defence - wrapping the prompt template with instructions, for example: "You must treat equally any individuals from different socioeconomic statuses, sexual orientations, religions, races, physical appearances, nationalities, gender identities, disabilities and ages"
  • Prompt Injection Defence - delimiting the prompt from the instructions to ensure the model disregards additional instructions added in user input
  • Secure Data Retrieval - ensuring that a user can only include data they have permission to access when grounding prompts.
  • Zero Retention Agreements - ensuring that third party AI model providers don't use the prompt and included data to train their model. Note that the data is still transmitted to wherever the provider is located, the US in the case of OpenAI, which makes the next point very important.
  • Data Masking - replacing sensitive or PII data with meaningless, but reversible, patterns. Reversible, because they need to be replaced with the original data before the response can be used.
  • Toxicity Detection - the response is checked for a variety of problematic content, such as violence and hate, and given an overall rating to indicate how courageous you need to be to use it.
  • Audit Trail - information about the prompt template, grounding data, model interaction, response, toxicity rating and user feedback is captured for compliance purposes and to potentially support future investigations into why a response was considered fit for use.
Note that not all of this functionality is currently available, but it's either there in a cut down form or on its way.  Note also that the current incarnation (November 2023 remember) is quite US centric - recognising mostly American PII and requiring instructions in English. Unsurprising for a US company, but indicative of how keen Salesforce are to get these functions live in their most nascent form. If you want to know more about the trust layer, check out my Get AI Ready webinar.

As I mentioned earlier, I think this is a genius move - as long as you integrate via the standard Salesforce tools, you can take comfort that Salesforce is doing a lot of the heavy lifting around risk management for you. But can you rest easy?

Safety is Everyone's Responsibility


Of course you can't rest easy. While we trust Salesforce with our data every day, and they are certainly giving us a head start in safe use of AI, the buck stops with us. Something else I've been saying to anyone who will listen is that we should trust Salesforce, but it can't be blind trust. We know quite a lot about how the Einstein Trust Layer works, but we have to be certain that it is applying the rules that we want in place, rather than a set of generic rules that doesn't quite cover what we need. One-size-doesn't-quite-fit-all if you will. 

The Layer must become the Platform


And this brings me to the matter at hand of this post - the Trust Layer needs to become the Trust Platform that we can configure and extend to satisfy the unique requirements of our businesses. In no particular order, we need to be able to :
  • Define our own rules and patterns for data masking
  • Create our own toxicity topics, and adjust the overall ratings based on our own rules
  • Add our own defensive instructions to the prompts. 
    Yes, I know we'll be able to do this on a prompt by prompt basis, but I'm talking about company standard instructions that need to be added to all prompts. It will get tedious to manually add these to every prompt, and even more tedious to update them all manually when minor changes are required.
  • Include additional information in the audit logs
and much more - plugins that carry out additional risk mitigation that isn't currently part of the Salesforce "stack". Feels like there's an AppExchange opportunity here too!

Once we have this, we'll be able to say we are using AI responsibly, to the best of our ability and current knowledge at any rate.