Sunday, 14 January 2024

Scratch Org Snapshots in Spring '24

Note: This feature is in beta in Spring '24. Like all other betas, this functionality may never go GA and may disappear at any time. Caveat emptor.

Image generated by DALL-E 3 based on a prompt by Bob Buzzard

Introduction

The Spring '24 release of Salesforce moves the scratch org snapshot functionality into beta. I've been waiting to get my hands on this and so far it hasn't disappointed.

Whenever we get new features of this nature, I like to reflect on how far we've come in Salesforce land. In this case I was testing with the codebase of our BrightMEDIA accelerator, and when we started building this in mid-2014 (aka nearly a decade ago), we typically allowed a week to get a new developer set up. We had to spin up a Developer Edition, raise a bunch of tickets to get various features enabled and the Apex character limit increased, install a number of packages, carry out a number of manual setup steps, deploy the code and assign permission sets. For whatever reason, no two Developer Editions appeared to have the same setup, so the deployment was typically an iterative process where we discovered what was missing, or off instead of on by default. The new developer would then set up some standing data to be able to work in the org.

Fast forward to the end of 2023 and I have a node script that creates a scratch org, installs the packages, deploys the code, loads the standing data and produces a ready-to-go development environment in around 30 minutes. I'm always interested in speeding things up though!

Creating a Snapshot

Thanks to a pre-release environment that I've also had for a decade, I have a pre-release dev hub, which meant I could enable the beta before the Spring '24 release went live. I then assigned myself the appropriate object permissions for Org Snapshots and was ready to create.

I set up my scratch org using my existing script, which creates an org with the following applied:

  • Four managed packages
  • Approximately 9,000 metadata components
  • Approximately 2,000 records

Creating a snapshot of this org took 11 minutes, which I must admit was quite a bit faster than I was expecting.
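
For reference, creating the snapshot is a single CLI command against the Dev Hub. A minimal sketch using the beta commands - the org alias, snapshot name, description and Dev Hub alias are mine, so substitute your own:

sf org create snapshot --source-org MyScratchOrg \
    --name BBMedia \
    --description "BrightMEDIA packages, metadata and standing data" \
    --target-dev-hub PreRelHub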

Using the Snapshot


This started off with a bit of a challenge: attempting to use the snapshot kept giving an error that the snapshot wasn't Active, but listing the Dev Hub snapshots showed that it was indeed Active. I spent a while searching through the CLI GitHub issues list and the snapshot pilot Trailblazer group, but it seemed I was the lucky one who got to experience this first. This was quite soon after the pre-release had gone live, so I figured it might be a simple bug and played the waiting game.
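
If you hit something similar, the snapshot status can be checked from the CLI. A sketch, again using the beta commands (the snapshot name and Dev Hub alias are placeholders):

# show a single snapshot, including its status
sf org get snapshot --snapshot BBMedia --target-dev-hub PreRelHub

# or list every snapshot on the Dev Hub
sf org list snapshots --target-dev-hub PreRelHub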

About 7 hours later my masterful inactivity was rewarded, as my snapshot sprang to life and I was able to run the command to create an org from it. In fairness, it might have started working 10 minutes later, but it was around 7 hours before I had the time to try it out again.

The even better news was that creating a scratch org from the snapshot took 6 minutes - an 80% saving on the 30 minute creation time for my script. The org was flawless too - all the metadata and data were there.
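
For completeness, consuming the snapshot is just a tweak to the scratch org definition file - the snapshot property replaces the edition. A sketch, with my names in place of yours:

{
    "orgName": "BBMedia Snapshot Org",
    "snapshot": "BBMedia"
}

Then the org is created as normal:

sf org create scratch --definition-file config/snapshot-def.json --alias BBSnapOrg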

The End of Sandboxes?


So does this mean that we can all create scratch org snapshots rather than sandboxes going forward? They even contain data, so maybe we can do away with full or partial copy sandboxes too.  I don't think so, for a few reasons.

Lifespan


Scratch orgs and org snapshots have a 30 day lifespan. From a developer perspective this is fine - we treat these orgs as disposable and typically create a fresh one when we start a new piece of work. That isn't necessarily the case for orgs used for training, QA, integration testing or testing against a new release. It's particularly unsuitable for pre-production environments which mirror production - imagine having to recreate all your test integrations at the start of every month!

Storage


Scratch orgs and org snapshots are limited to 200 MB of data. Probably fine for many development tasks, but likely to be too small for training, pre-production and test environments that are indicative of production.

Licenses


Sandboxes replicate your production org licenses, so all of your users can have access. Scratch orgs are much more restrictively licensed, usually somewhere between 1 and 10 seats per feature. When we were adding community (now Experience Cloud) features to BrightMEDIA, we had the princely sum of 1 partner community license available in our scratch orgs - you'd have to be quite brave to promote to production with that kind of limitation on your testing!

Completeness of Version Controlled Metadata


This is where developer/developer pro sandboxes will retain their usefulness once scratch org snapshots are live. Some organisations with large, mature Salesforce orgs won't have all their metadata in version control - why would they invest the time and money to do that when they don't need to? They'll likely have Apex, flows, Lightning components, and maybe some second generation packages in version control, but things like sharing rules, report and dashboard folders, and duplicate rules that are managed by administrators probably won't be. Yes, this is a sweeping generalisation, but you get the general idea. Being able to create a guaranteed replication of production to work in will be an important capability for years to come, in my view. That said, such sandboxes will probably become less used as time goes on, especially if scratch org snapshots get longer lifespans.

So not a sandbox killer, but that was never the intention. For those of us with a very source-centric development approach however, this is another great addition to the developer toolbelt.

Sunday, 7 January 2024

Breaking Batch

Image generated by DALL-E 3 from a prompt by Bob Buzzard

Introduction


In my last blog post (A Tale of Two Contains Methods) I mentioned that I'd spent quite a bit of December taking part in Advent of Code. Each day there were two challenges - a (relatively) straightforward one that could potentially be brute forced, and an extended version where brute forcing would take days, so a more thoughtful approach was required. As I was tackling these challenges using Apex, brute forcing wasn't really an option, so my solution typically involved building structures of complex objects in memory in order to be able to process them quickly. Pretty much every extended version required batch Apex to handle the volumes, and in a few cases the (relatively) straightforward one did too.

The combination of the complex object structure and batch Apex threw up some interesting errors, so I decided to blog about one of these. A couple of things to note:
  • This isn't a moan about batch Apex - I was using it in a way that I'm pretty sure it wasn't intended for, and there was a simple workaround
  • By complex object I just mean one that is made up of primitives, simple(r) objects and collections - it doesn't mean it was a particularly difficult structure to comprehend or change.

The Challenge


(Some of the challenge detail has been removed for clarity - you can see it in its full glory here)
Part 1 of the challenge in question was around bricks of varying length in a 3-dimensional structure (essentially a large cube) that had landed on top of each other like a weird Jenga puzzle. Based on the starting coordinate and dimensions of each brick, I needed to figure out how the bricks were supported in the structure. 

The approach I took was to represent a brick as an object and hold two associated collections for each Brick instance:
  • Supporters - these are the Bricks that are directly beneath this Brick and in contact with it.
  • Supporting - these are the Bricks directly above this Brick that it is in contact with and supporting.

The answer I had to calculate to complete the challenge was the number of bricks that I could remove without causing any other bricks to fall. This could be accomplished by iterating the bricks and adding up all of those where every Supporting brick was also supported by others.
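
In Apex terms that check looks something like the following sketch - it uses the Brick class shown further down, and assumes the full collection is held in a List named bricks:

Integer safeToRemove=0;
for (Brick brick : bricks)
{
    Boolean safe=true;
    for (Brick above : brick.supporting)
    {
        // if this brick is the only supporter of anything above it,
        // removing it would cause a fall
        if (above.supporters.size()==1)
        {
            safe=false;
            break;
        }
    }
    if (safe)
    {
        safeToRemove++;
    }
}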

Part 2 was to find the sum of the bricks that would fall if each of the bricks in turn were removed. With the structure that I had in place, this was actually quite simple. I iterated the bricks, found all of the Supporting entries where that brick was the only Supporter, then found all of their Supporting entries where they were the only Supporter, and so on until I reached the end. This would definitely need batch Apex though, as there were 1,500 bricks in the actual challenge input.
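
That walk can be expressed along the following lines - a sketch, slightly generalised so that a brick falls once all of its supporters have fallen, and again assuming the Brick class below and a List named bricks:

Integer total=0;
for (Brick removed : bricks)
{
    Set<Brick> fallen=new Set<Brick>{removed};
    List<Brick> toProcess=new List<Brick>{removed};
    while (!toProcess.isEmpty())
    {
        Brick current=toProcess.remove(0);
        for (Brick above : current.supporting)
        {
            // a brick falls once every one of its supporters has fallen
            if (!fallen.contains(above) && fallen.containsAll(above.supporters))
            {
                fallen.add(above);
                toProcess.add(above);
            }
        }
    }
    // the removed brick itself doesn't count as a faller
    total+=fallen.size()-1;
}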

Each challenge includes a small example with the workings and answers - 6 bricks in this case - so I was able to test my batch Apex before executing with the larger volume of data.

My Brick class was as follows:
public class Brick
{
    // identifier from the challenge input - also the key of the Brick map
    public String brickNo;
    // start/end coordinates (Point3d is a simple custom x/y/z class)
    public Point3d startPoint;
    public Point3d endPoint;
    // dimensions of the brick
    public Integer width;
    public Integer depth;
    public Integer height;
    // Bricks directly beneath this one and in contact with it
    public Set<Brick> supporters=new Set<Brick>();
    // Bricks directly above this one that it is supporting
    public Set<Brick> supporting=new Set<Brick>();
    public Integer totalSupporters=0;
}

The start method of the batch class converted the input into a collection of Bricks and then returned a collection of Integers, one per Brick. I implemented Database.Stateful so that the collection of Bricks was available across each execute method, and then processed the Bricks whose brickNo appeared in the scope. Essentially I'd broken up my iteration of the Bricks across a number of transactions, while ensuring I only had to build the Bricks structure once at the start.
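
Stripped right back, the batch class had the following shape - a sketch with my names, and with the input parsing and support walking elided:

public class BrickBatch implements Database.Batchable<Integer>, Database.Stateful
{
    // instance state is de/serialised around every execute method
    public Map<String, Brick> bricksByNo=new Map<String, Brick>();
    public Integer total=0;

    public Iterable<Integer> start(Database.BatchableContext context)
    {
        // build the Brick structure once, then return one Integer per Brick
        buildBricks();
        List<Integer> brickNumbers=new List<Integer>();
        for (Integer idx=0; idx<bricksByNo.size(); idx++)
        {
            brickNumbers.add(idx);
        }
        return brickNumbers;
    }

    public void execute(Database.BatchableContext context, List<Integer> scope)
    {
        // each transaction processes only the Bricks whose numbers are in scope
        for (Integer brickNumber : scope)
        {
            Brick brick=bricksByNo.get(String.valueOf(brickNumber));
            total+=countFallers(brick);
        }
    }

    public void finish(Database.BatchableContext context)
    {
        System.debug('Total fallers = ' + total);
    }

    private void buildBricks()
    {
        // parse the challenge input into Bricks keyed by brickNo - elided
    }

    private Integer countFallers(Brick brick)
    {
        // walk the supporting/supporters chains as described above - elided
        return 0;
    }
}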

When I ran this with the example, it worked fine and gave me the correct answer. 

The Problem


I then fired it off with the (much larger) challenge input, and was initially pleased to see that I was able to build the in-memory structure without running into any issues around heap or CPU. Sadly this pleasant sensation was short lived, as the first batch that executed failed with an internal error.

Based on the debug that I had in the class, it was clear that the batch job was failing before it got to any of my code. After some binary chop style debugging, where I retried the batch with various parts of the code commented out, it turned out that the issue was my collections:
    public Set<Brick> supporters;
    public Set<Brick> supporting;

As I already had the full collection of Bricks stored in a Map keyed by brickNo, turning these into Sets of Strings and storing the brickNo rather than a reference to the Brick itself didn't need much in terms of changes to the code, and allowed the batch to complete without issue.
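
The reworked fields looked something like this (the names are mine):

// in Brick - hold the keys of the related Bricks rather than references
public Set<String> supporterNos=new Set<String>();
public Set<String> supportingNos=new Set<String>();

The processing code then rehydrates a Brick on demand via the Map held in the batch class:

for (String supportingNo : brick.supportingNos)
{
    Brick above=bricksByNo.get(supportingNo);
    // process as before
}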

So why were Sets of Strings okay but Sets of Bricks not? Once I was into a large cube with 1,500 bricks in it, the sets got pretty big. As the Bricks were stored in an instance variable, they were part of the state of the batch and thus de/serialised for each batch processed. Obviously I'm not privy to exactly how the batch processing in Apex works, but I'd imagine that serialising ended up with a pretty huge structure with a lot of repetition, as the same Brick instances were expanded many times as part of the Supporters and Supporting collections. Deserialising this structure clearly proved too much, hence the internal error.

In Conclusion


As mentioned earlier, this isn't intended to throw shade on batch Apex. Storing large collections of complex objects that contain collections of other complex objects, so they can be accessed across transactions, really isn't a valid use case. This kind of information belongs in the database rather than in the batch class, while Database.Stateful is more appropriate for managing things like running totals.

This is one of the reasons that I really enjoyed taking on Advent of Code with Apex - I'm trying to solve problems that (a) I'd never encounter in a customer implementation and (b) the Salesforce platform is really not suited to handling.

This was also a lesson in the need to test with indicative data - everything worked fine with the small amount of test data I had available, but once I hit the real data the flaws were revealed!
