Saturday, 13 October 2018

All Governor Limits are Created Equal

Introduction

Not all Salesforce governor limits inspire the same fear in developers. Top of the pile are DML statements and SOQL queries, followed closely by heap size, while the rest are relegated to afterthought status, typically only thought about when they breach. There’s a good reason for this - the limits that we wrestle with most often bubble to the top of our thoughts when designing a solution. Those that we rarely hit get scant attention, probably because on some level we assume that if we aren’t breaching these limits regularly, we must have some kind of superpower to write code that is inherently defensive against them, rather than the more likely explanation - either the limit is very generous, or it’s dumb luck.

This can lead to code being written that is skewed towards defending against a couple of limits, but will actually struggle to scale due to the lack of consideration for the others. To set expectations, the example that follows is contrived - a real world example would require a lot more code and shift the focus away from the point I’m trying to make.

The Example

For some reason, I want to create two lists in my Apex class - one that contains all leads in the system where the last name starts with the letter ‘A’ and another list containing all the rest. Because I’m scared of burning SOQL queries, I query all the leads and process the results:

List<Lead> leads=[select id, LastName from Lead];
List<Lead> a=new List<Lead>();
List<Lead> btoz=new List<Lead>();
for (Lead ld : leads)
{
    String lastNameChar1=ld.LastName.toLowerCase().substring(0,1);
    if (lastNameChar1=='a')
    {
        a.add(ld);
    }
    else 
    {
        btoz.add(ld);
    }
}

System.debug('A size = ' + a.size());
System.debug('btoz size = ' + btoz.size());

The output of this shows that I’ve succeeded in my mission of hoarding as many SOQL queries as I can for later processing:

Screen Shot 2018 10 13 at 16 20 14

But look towards the bottom of the screenshot - while I’ve only used 1% of my SOQL queries, I’ve gone through 7% of my CPU time limit. Depending on what else needs to happen in this transaction, I might have created a problem now or in the future. But I don’t care, as I’ve satisfied my requirement of minimising SOQL queries.

If I fixate a bit less on the SOQL query limit, I can create the same lists in a couple of lines of code but using an additional query:

List<Lead> a=[select id, LastName from Lead where LastName like 'a%'];
List<Lead> btoz=[select id, LastName from Lead where ID not in :a];
System.debug('A size = ' + a.size());
System.debug('btoz size = ' + btoz.size());

Because the CPU time limit doesn’t include time spent in the database, I don’t consume any of that limit:

Screen Shot 2018 10 13 at 16 23 59

Of course I have consumed another SOQL query though, which might create a problem now or in the future. There’s obviously a trade-off here and fixating on minimising the CPU impact and ignoring the impact of additional SOQL queries is equally likely to cause problems.
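
If you want to see these numbers without trawling the limit summary in the debug log, the Limits class exposes the consumed and maximum values for each limit - a quick sketch of instrumenting the code under test:

// Capture consumption against the limits I care about - run this after
// the code under test (or before and after, and diff the values)
System.debug('SOQL queries: ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
System.debug('CPU time (ms): ' + Limits.getCpuTime() + ' / ' + Limits.getLimitCpuTime());
System.debug('Heap size: ' + Limits.getHeapSize() + ' / ' + Limits.getLimitHeapSize());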

Conclusion

When designing solutions, take all limits into account. Try out various approaches and see what the impact of the trade-offs is, and use multiple runs with the same data to figure out the impact on the CPU limit, as my experience is that this can vary quite a bit. There’s no silver bullet when it comes to limits, but spreading the load across all of them should help to stretch what you can achieve in a single transaction. Obviously this means the design time takes a bit longer, but there’s an old saying that programming is thinking not typing, and I find this to be particularly true when creating Salesforce applications that have to perform and scale for years. The more time you spend thinking, the less time you’ll spend trying to fix production bugs when you hit the volumes that everybody was convinced would never happen.

 

 

Saturday, 6 October 2018

Dreamforce 2018

IMG 4447

2018 marked my 9th Dreamforce, although the first of these was staffing a booth in the Expo for 4 days so it’s difficult to count that. In a change of pace, the event started on day -1 with a new initiative from Salesforce.

Monday - CTA Summit

IMG 4432

The CTA Summit was pretty much my favourite part this year - an audience of CTAs and travelling bands of Product Managers giving us lightning (the pace, not the technology, although we did get some of that as well!) presentations and answering difficult questions - for me the questions were as informative as the presentations, especially if it was an area that I haven’t done a lot of work in. Nothing like knowing what others are struggling with so you can be ahead of the game.

The format was one that I’m reasonably familiar with, having been lucky enough to attend 3 MVP summits over the years, especially the constant reminders that you are under NDA and can’t share any of the content. Sorry about that! One thing I can share is that Richard Socher (Salesforce Chief Scientist) is an excellent speaker and if you get a chance to attend one of his talks, grab it. Some of the sessions were hosted at the Rincon Centre, where I saw a couple of outfits that looked really familiar.

IMG 4444

Tuesday - Keynote

Early start as the Community Group Leaders State of the Union session was a breakfast presentation at the Palace Hotel from 7am, then more CTA Summit sessions before heading over to Moscone Center.

For the first (no pun intended) time that I can remember, Marc Benioff’s keynote took place on Day 1. As an MVP I’m lucky enough to get reserved seating for the keynote and made sure I was there in plenty of time - queueing with Shell Black prior to security opening put us in the first row of reserved seating, three rows back from the stage.

IMG 4464

The big announcements from the keynote were Einstein Voice (conversational CRM + voice bots) and Customer 360. If you want to know more about these, here’s a slide from the BrightGen Winter 19 release webinar with links to the appropriate keynote recordings (if you can’t read the short links, you can access the original deck here).

 

Screen Shot 2018 10 06 at 15 57 48

The keynote wasn’t the end of the day though - with CTA and MVP receptions on offer, I was networking and learning late into the evening.

Wednesday - Speaking

After a couple of days listening to other people talking, it was my turn. First up was the Climbing Mount CTA panel session over at the Partner Lodge, where I was up the front in some stellar company. We even had parity between the male and female CTAs on the panel, which is no mean feat when you look at the stats - something the Ladies Be Architects team, in the foreground, are working hard to address.

DodPm5pVAAIjddj

 

After this I headed back to Trailhead in Moscone West, showing incredible self-control as the partner reception was just starting. I limited myself to a single bite-sized corn dog and ignored the voice in my head telling me that a couple of beers would loosen me up nicely, and went straight over to the speaker readiness room to remind myself of the content for my Developer Theatre session “Quickstart Templates with the Salesforce CLI” (picture courtesy of my session owner, Caroline Hsieh).

IMG 1632

Once the talk was over I continued in a tradition started last year and went out for a beer with some of the contingent from Sweden - Johan (aka Kohan, Mohan) Karlsteen, the creator of Top Trailblazers, and one of his colleagues. As is also traditional, we took a picture and included a snapshot of the guy who couldn’t make it so that he didn’t feel left out :)

DobGxdpWkAEwH97

 

Thursday - More Keynotes and the Return of @adamse

Thursday was the second highlight of the event for me - the Developer Keynote, closely followed by the Trailhead keynote. A big surprise at the Dev Keynote was the presence of Adam Seligman, formerly of Salesforce but now with Google.

DoIF5RaV4AAa6gW

And I had pretty good seats in the second row for this keynote - better than Dave Carroll in fact. Did I mention I’m an MVP?

DoIE1RzV4AAbR 9

 

The gap between the Dev and Trailhead keynotes was only 30 minutes, but as they are literally over the road from each other I made it in about 20 (there are a few people at Dreamforce, so crossing the road isn’t as simple as it sounds!). I typically sit a bit further back in this one as there is a huge amount of exuberance in the front few rows and I try to avoid displaying any excitement or enthusiasm in public if I can.

After the keynotes I caught a couple more sessions before heading over to Embarcadero for the annual BrightGen customer dinner. I’d said to my colleagues when I bumped into them on Monday night that I’d see them again on Thursday. Some of those that hadn’t been before thought I was joking, but they weren’t laughing when I showed up at 7pm.

I also got to catch up with the winner of the UK and Ireland social media ambassador at Dreamforce competition - long time BrightGen customer, Cristina Bran, who told me all about her amazing week.

DoKpmxgU8AAZizB

Friday - Salesforce Tower Trip

I’d been lucky enough to have my name picked out of the hat to get to visit the Ohana floor of the Salesforce Tower. It was a bit of a foggy day (what are the odds) but the views were still pretty spectacular.

IMG 4517

And that was Dreamforce over for another year!

Vegas Baby!

The BrightGen contingent traveled home via Las Vegas for a little unwinding and team building. Like the CTA Summit, what happens in Vegas stays in Vegas, so I can only show this picture of me arriving at the airport, still sporting some Salesforce branding.

IMG 4541

 

Thursday, 13 September 2018

Background Utility Items in Winter 19

Introduction

The Winter 19 release of Salesforce introduces the concept of Background Utility Items - Lightning Components that are added to the Lightning Experience utility bar but don’t take up any real estate, can’t be opened and have no user interface. This is exactly what I was looking for when I put together my Toast Message from a Visualforce Page blog - the utility bar component that receives the notification to show a toast message doesn’t need to interact with the user, but still has an entry in the utility bar. The user can also click on the item and receive a lovely empty popup window:

Screen Shot 2018 09 08 at 08 09 52

Not the worst user experience in the world, but not the best either. I guess in production I’d probably add a message that this item isn’t user configurable. One item like this isn’t so bad, but imagine if there were half a dozen - a large chunk of the utility bar would be taken up with items that only serve to distract the user, although I’d definitely look to combine them all into a single item if I could.

Refresher

In case you haven’t committed the original blog post to memory (and I’m not going to lie, that hurts), here’s how it works:

Toast

 

I enter a message in my Lightning component, which fires a toast event (1). This is picked up by the event handler in the Visualforce JavaScript, which posts a message (2) that is received by the Lightning component in the utility bar. This fires its own toast event (3) that, as it is executing in the one.app container, displays a toast message to the user.

Implement the Interface

Removing the UI aspect is as simple as implementing an interface - lightning:backgroundUtilityItem. Once I’ve updated my component definition with this (and changed the domain references to match my pre-release org):

<aura:component 
    implements="flexipage:availableForAllPageTypes,lightning:backgroundUtilityItem" 
    access="global" >

    <aura:attribute name="vfHost" type="String"
             default="kabprerel-dev-ed--c.gus.visual.force.com"/>
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
</aura:component>

When I open my app now, there’s nothing in the utility bar to consume space or attract the user, but my functionality works the same:

Toast2

You can find the updated code at my Winter 19 Samples github repo.


Tuesday, 4 September 2018

Callable in Salesforce Winter 19

Call

Introduction

The Winter 19 Salesforce release introduces the Callable interface which, according to the docs:

Enables developers to use a common interface to build loosely coupled integrations between Apex classes or triggers, even for code in separate packages. 

Upon reading this I spent some time scratching my head trying to figure out when I might use it. Once I stopped thinking in terms of I and started thinking in terms of we, specifically a number of distributed teams, it made a lot more sense.

Scenario

The example scenario in this post is based on two teams working on separate workstreams in a single org, the Core team and the Finance team. The Core team create functionality used across the entire org, for the Finance team and others.

The Central Service

The Core team have created a Central Service interface, defining key functionality for all teams (try not to be too impressed by the creativity behind my shared action names):

public interface CentralServiceIF {
    Object action1();
    Object action2();
}

and an associated implementation for those teams that don’t have specific additional requirements:

public class CentralServiceImpl implements CentralServiceIF {
    public Object action1() {
        return 'Interfaced Action 1 Result';
    }
    public Object action2() {
        return 'Interfaced Action 2 Result';
    }
}

The Finance Implementation

The Finance team have specific requirements around the Central Service, so they create their own implementation - in the real world this would likely delegate to the Central Service and enrich the results with finance data, but in this case it returns a slightly different string (artist at work, eh?):

public class CentralServiceFinanceImpl implements CentralServiceIF {
    public Object action1() {
        return 'Finance Action 1 Result';
    }
    public Object action2() {
        return 'Finance Action 2 Result';
    }
}

The New Method

Everything ticks along quite happily for a period of time, and then the Core team updates the interface to introduce a new function - the third action that everyone thought was the stuff of legend. The interface now looks like:

public interface CentralServiceIF {
    Object action1();
    Object action2();
    Object action3();
}

and the sample implementation:

public class CentralServiceImpl implements CentralServiceIF {
    public Object action1() {
        return 'Interfaced Action 1 Result';
    }
    public Object action2() {
        return 'Interfaced Action 2 Result';
    }
    public Object action3() {
        return 'Interfaced Action 3 Result';
    }
}

This all deploys okay, but when the finance team next trigger processing via their implementation, there’s something rotten in the state of the Central Service:

Line: 58, Column: 1 System.TypeException: 
Class CentralServiceFinanceImpl must implement the method:
Object CentralServiceIF.action3()

Now obviously this was deployed to an integration sandbox, where all the code comes together to make sure it plays nicely, so the situation surfaces well away from production. However, if the Core team have updated the Central Service interface in response to an urgent request from another team, then the smooth operation of the workstreams has been disrupted. The interface is a shared resource that is resistant to change - updating it requires coordination across all teams.

The Callable Implementations

Implementing the Core Central Service as a callable:

public class CentralService implements Callable {
   public Object call(String action, Map<String, Object> args) {
       switch on action {
           when 'action1' {
               return 'Callable Action 1 result';
           }
           when 'action2' {
               return 'Callable Action 2 result';
           } 
           when else {
               return null;
           }
       }
   }
}

and the Finance equivalent:

public class CentralServiceFinance implements Callable {
    public Object call(String action, Map<String, Object> args) {
        switch on action {
            when 'action1' {
                return 'Callable Action 1 result';
            }
            when 'action2' {
                return 'Callable Action 2 result';
            } 
            when else {
                return null;
           }
       }
    }
}

Now when a third method is required in the Core implementation, it’s just another entry in the switch statement:

switch on action {
    when 'action1' {
        return 'Callable Action 1 result';
    }
    when 'action2' {
        return 'Callable Action 2 result';
    } 
    when 'action3' {
        return 'Callable Action 3 result';
    } 
    when else {
        return null;
    }
}

while the Finance implementation can remain blissfully unaware of the new functionality until it is needed, or a task to provide support for it can be added into the Finance workstream.
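
The loose coupling relies on callers locating the implementation dynamically rather than referencing the class directly. A sketch of how that might look - in a real org the class name would come from configuration, such as a custom metadata type, rather than being hardcoded:

// Instantiate by name - no compile-time dependency on the implementation
Callable service=(Callable) Type.forName('CentralServiceFinance').newInstance();

// Actions the implementation doesn't recognise return null rather than
// breaking the build
Object result=service.call('action1', new Map<String, Object>());
System.debug(result);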

Managed Packages

I can also see a lot of use cases for this if you have common code distributed via managed packages to a number of orgs. You can include new functions in a release of your package without requiring every installation to update their code to the latest interface version - as you can’t update a global interface once published, you have to shift everything to a new interface (typically using a V<release> naming convention), which may cause some churn across the codebase.

Conclusion

So is this something that I’ll be using on a regular basis? Probably not. In the project that I’m working on at the moment I can think of one place where this kind of loose coupling would be helpful, but it obviously makes the code more difficult to read and understand, especially if the customer’s team doesn’t have a lot of development expertise.

My Evil Co-Worker likes the idea of building a whole application around a single Callable class - everything else would be a thin facade that hands off to the single Call function. They claim it's a way to obfuscate code, but I think it's just to annoy everyone.


Monday, 27 August 2018

Lightning Emp API in Winter 19

Introduction

It’s August. After weeks of unusually sunny days, the schools in the UK have broken up and the weather has turned. As I sit looking out at cloudy skies, my thoughts turn to winter. Winter 19 to be specific - the release notes are in preview and some of the new functionality has hit my pre-release org. The first item that I’ve been playing with is the new Emp API component, which takes away a lot of the boilerplate code that I have to copy and paste every time I create a component that listens for platform events.

From the preview docs, this component

Exposes the EmpJs Streaming API library which subscribes to a streaming channel and listens to event messages using a shared CometD connection. This component is supported only in desktop browsers. This component requires API version 44.0 and later.

What I Used to Do

Previously, to connect to the streaming API and start listening for events, I’d need code to:

  • Download the cometd library and put it into a static resource
  • Add the static resource to my component
  • Instantiate org.cometd.CometD
  • Call the server to get a session id
  • Configure cometd with the session id and the Salesforce endpoint
  • Carry out the cometd handshake
  • Subscribe to my platform event channel
  • Wait for messages

What I do Now

  • Add the lightning:empApi component inside my custom component
  • Add an error handler in case anything goes wrong (see the sketch below)
  • Subscribe to my platform event channel
  • Wait for a message

In practice this means my controller code has dropped from 70-odd lines to around 20.
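
For the error handler mentioned in the list above, the component exposes an onError method that registers a listener for streaming errors - a minimal sketch (I’m just logging; a real component would tell the user something useful):

var empApi = component.find("empApi");
// Invoked if the server reports an error on the streaming connection
empApi.onError(function(error) {
    console.log("EMP API error: " + JSON.stringify(error));
}.bind(this));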

Example

The first thing I need for an example is a platform event - I’ve created one called Demo_Event__e, which contains a single field named ‘Message__c’. This holds the message that I’ll display to the user.

My example component (Demo Events) actually uses a couple of standard components - the Emp API and the notifications library - the latter is used to show a toast message when I receive an event:

<lightning:empApi aura:id="empApi" /> 
<lightning:notificationsLibrary aura:id="notifLib"/>

The controller handles all the setup via a method invoked when the standard init event is fired. Before I can do anything I need a reference for the Emp API component:

var empApi = component.find("empApi");

Once I have this I can subscribe to my demo event channel - I’ve chosen a replayId of -1 to say start with the next event published. Note that I also capture the subscription object returned by the promise so that I can unsubscribe later if I need to (although my sample component doesn’t actually do anything with it).

var channel='/event/Demo_Event__e';
var sub;
var replayId=-1;
empApi.subscribe(channel, replayId, callback).then(function(value) {
      console.log("Subscribed to channel " + channel);
      sub = value;
      component.set("v.sub", sub);
});

I also provide a callback function that gets invoked whenever I receive a message. This simply finds the notification library and executes the showToast aura method that it exposes.

var callback = function (message) {
    component.find('notifLib').showToast({
        "title": "Message Received!",
        "message": message.data.payload.Message__c
    });
}.bind(this);
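
If I ever do need to unsubscribe, the stored subscription object gets passed back to the same component - a sketch, assuming the sub attribute populated during subscription:

var empApi = component.find("empApi");
// Hand back the subscription object captured when subscribing
empApi.unsubscribe(component.get("v.sub"), function(response) {
    console.log("Unsubscribed: " + JSON.stringify(response));
}.bind(this));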

On the server side I have a class exposing a single static method that allows me to publish a platform event:

public class PlatformEventsDemo 
{
    public static void PublishDemoEvent(String message)
    {
        Demo_Event__e event = new Demo_Event__e(Message__c=message);
        Database.SaveResult result = EventBus.publish(event);
        if (!result.isSuccess()) 
        {
            for (Database.Error error : result.getErrors()) 
            {
                System.debug('Error returned: ' +
                             error.getStatusCode() + ' - ' +
                             error.getMessage());
            }
        }
    }
}

Running the Example

I’ve added my component to a lightning page - as it doesn’t have any UI you’ll have to take my word for it! Using the execute anonymous feature of the dev console, I publish a message:

Screen Shot 2018 08 25 at 13 51 12
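
The statement in the screenshot is along these lines (the message text is illustrative):

PlatformEventsDemo.PublishDemoEvent('Test message');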

 

And on my lightning app page, shortly afterwards I see the toast message:

 

Screen Shot 2018 08 25 at 13 49 55

 


Sunday, 5 August 2018

Putting Your TBODY on the Line

Table

Introduction

This week I’ve been working on a somewhat complex page built up from a number of Lightning components. One of the areas of the page is a table showing the paginated results of a query, with various sorting options available from the headings, and a couple of summary rows at the bottom of the page. The screenshot below shows the last few rows, the summary info and the pagination buttons.

Screen Shot 2018 08 04 at 17 34 06

The markup for this is of the following format:

<table>
  <thead>
    <tr>
      <aura:iteration ...>
        <th> _head_ </th>
      </aura:iteration>
    </tr>
  </thead>
  <tbody>
    <tr>
      <aura:iteration ...>
        <td> _data_ </td>
      </aura:iteration>
    </tr>
    ...
    <tr>
      <td>_summary_</td>
      <td>_summary_</td>
    </tr>
    <tr>
      <td>_summary_</td>
      <td>_summary_</td>
    </tr>
  </tbody>
</table>

So pretty much a standard HTML table with a couple of aura:iteration components to output the headings and rows. Obviously there’s a lot more to it than this (styling etc), but for the purposes of this post those are the key details.

The Problem

Once I’d implemented the column sorting (and remembered that you need to return a value from an inline sort function, otherwise it’s deemed to mean that all the elements are equal to each other!), I was testing by mashing the column sort buttons and after a few sorts something odd happened:

Screen Shot 2018 08 04 at 17 35 10

The values inserted by the aura:iteration were sandwiched in between the two summary rows that should appear at the bottom.  I refreshed the page and tried again and this time it got a little worse:

Screen Shot 2018 08 04 at 17 33 32

This time the aura:iteration values appeared below both summary rows. I tested this on Chrome and Firefox and the behaviour was the same for both browsers. 
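
As an aside, the sort gotcha mentioned earlier: a comparator that doesn’t return a value returns undefined for every comparison, which sort() takes to mean the elements are equal. The correct shape, with names invented for illustration:

// Return negative, zero or positive - miss the return statement and
// every pair looks "equal"
rows.sort(function(a, b) {
    return a.Name < b.Name ? -1 : (a.Name > b.Name ? 1 : 0);
});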

The Workaround

I’ve hit a few issues around aura:iteration in the past, although usually it’s been with the body of the component rather than the surrounding elements, and I recalled that often the issue could be solved by separating the standard Lightning components with regular HTML. I could go with <tfoot>, but according to the docs this indicates that if the table is printed the summary rows should appear at the end of each page, which didn’t seem quite right.

I already had a <tbody>, but looking at the docs a table can have multiple <tbody> tags, to logically separate content, so another one of these sounded exactly what I wanted. Moving the summary rows into their own “section” as follows:

<table>
  <thead>
    <tr>
      <aura:iteration ...>
        <th> _head_ </th>
      </aura:iteration>
    </tr>
  </thead>
  <tbody>
    <tr>
      <aura:iteration ...>
        <td> _data_ </td>
      </aura:iteration>
    </tr>
    ...
  </tbody>
  <tbody>
    <tr>
      <td>_summary_</td>
      <td>_summary_</td>
    </tr>
    <tr>
      <td>_summary_</td>
      <td>_summary_</td>
    </tr>
  </tbody>
</table>

worked a treat. Regardless of how much I bounced around and clicked the headings, the summary rows remained at the bottom of the table as they were supposed to.

I haven’t been able to reproduce this with a small example component - it doesn’t appear to be related to the size of the list backing the table, as I’ve tried a simple variant with several thousand members and the summary rows stick resolutely to the bottom of the table. Given that I have a workaround I’m not sure how much time I’ll invest in digging deeper, but if I do find anything you’ll read about it here.


Saturday, 21 July 2018

Exporting Folder Metadata with the Salesforce CLI

Folder

Introduction

In my earlier post, Exporting Metadata with the Salesforce CLI, I detailed how to replicate the Force.com CLI export command using the Salesforce CLI. One area that neither of these handles is metadata inside folders - reports, dashboards and email templates. Since then I’ve figured out how to do this, so the latest version of the CLIScripts Github repo has the code to figure out which folders are present, and include the contents of each of these in the export.

Identifying the Folders

This turned out to be a lot easier than I expected - I can simply execute a SOQL query on the Folder sobject type and process the results:

let query="Select Id, Name, DeveloperName, Type, NamespacePrefix from Folder where DeveloperName!=null";
let foldersJSON=child_process.execFileSync('sfdx', 
    ['force:data:soql:query',
        '-q', query, 
        '-u', this.options.sfdxUser,
        '--json']);

Note that I’m not entirely sure what it means when a folder has a DeveloperName of null - I suspect it indicates a system folder, but as the folders I was interested in appeared, I didn’t look into this any further.

I then created a JavaScript object containing a nested object for each folder type:

this.foldersByType={'Dashboard':{},
                    'Report':{}, 
                    'Email':{}}

and then parsed the resulting JSON, adding each result into the appropriate folder type object as a property named for the folder Id. The property contains another nested object wrapping the folder name and an array where I will store each entry from the folder:

var foldersForType=this.foldersByType[folder.Type];
if (foldersForType) {
    if (!foldersForType[folder.Id]) {
        foldersForType[folder.Id]={'Name': folder.DeveloperName, 'members': []};
    }
}
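
For context, the surrounding loop is nothing special - the --json flag wraps the query rows in result.records, so the parse and iterate looks something like this (variable names assumed):

// --json output has the shape { status, result: { records: [...] } }
var folders=JSON.parse(foldersJSON).result.records;
for (var idx=0; idx<folders.length; idx++) {
    var folder=folders[idx];
    // ... the per-folder logic shown above ...
}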

Retrieving the Folder Contents

Once I have all of the folders stored in my complex object structure, I can query the metadata for each folder type - the dashboards in this instance - and build out my structure modelling all the folders and their contents:

let query="Select Id, DeveloperName, FolderId from Dashboard";
let dashboardsJSON=child_process.execFileSync('sfdx', 
    ['force:data:soql:query',
        '-q', query, 
        '-u', this.options.sfdxUser,
        '--json']);

I then iterate the results and add these to the members for the specific folder:

var foldersForDashboards=this.foldersByType['Dashboard'];
var folderForThisDashboard=foldersForDashboards[dashboard.FolderId];
if (folderForThisDashboard) {
    folderForThisDashboard.members.push(dashboard);
}

Adding to the Manifest

As covered in the previous post on this topic, once I’ve identified the metadata required, I have to add it to the manifest file - package.xml. 

I already had a method to add details of a metadata type to the package, so I extended that to include a switch statement to change the processing for those items that have folders. Using dashboards as the example again, I iterate all the folders and their contents, adding the appropriate entry for each:

case 'Dashboard':
    var dbFolders=this.foldersByType['Dashboard'];
    for (var folderId in dbFolders) {
        if (dbFolders.hasOwnProperty(folderId)) {
            var folder=dbFolders[folderId];
            this.addPackageMember(folder.Name);
            for (var dbIdx=0; dbIdx<folder.members.length; dbIdx++) {
                var dashboard=folder.members[dbIdx];
                this.addPackageMember(folder.Name + '/' + dashboard.DeveloperName);
            }
        }
    }
    break;
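
addPackageMember isn’t shown here, but there’s nothing clever about it - a hypothetical minimal version just appends a members element to the types element currently being built:

// Hypothetical sketch - accumulates <members> entries for the
// current <types> element
addPackageMember(name) {
    this.packageXml+='        <members>' + name + '</members>\n';
}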

In the case of our BrightMedia appcelerator, the package.xml ends up looking something like this:

<types>
    <members>BG_Dashboard</members>
    <members>BG_Dashboard/BrightMedia</members>
    <members>BG_Dashboard/BrightMedia_digital_dashboard</members>
    <members>Best_Practice_Service_Dashboards</members>
    <members>Best_Practice_Service_Dashboards/Service_KPIs</members>
    <members>Sales_Marketing_Dashboards</members>
    <members>Sales_Marketing_Dashboards/Sales_Manager_Dashboard</members>
    <members>Sales_Marketing_Dashboards/Salesperson_Dashboard</members>
    <name>Dashboard</name>
</types>

Exporting the Metadata

One thing to note is that the export is slowed down a bit as there are now six new round trips to the server - three for each of the folder types, and three more to retrieve the metadata for each type of folder. Exporting the metadata using the command:

node index.js export -u <username> -d output

creates a new output folder containing the zipped metadata. Unzipping this shows that the dashboard metadata has been retrieved as expected:

> ls -lR dashboards

total 24
drwxr-xr-x 5 kbowden staff 160 21 Jul 15:29 BG_Dashboard
-rw-r--r-- 1 kbowden staff 154 21 Jul 14:27 BG_Dashboard-meta.xml
drwxr-xr-x 5 kbowden staff 160 21 Jul 15:29 Best_Practice_Service_Dashboards
-rw-r--r-- 1 kbowden staff 174 21 Jul 14:27 Best_Practice_Service_Dashboards-meta.xml
drwxr-xr-x 6 kbowden staff 192 21 Jul 15:29 Sales_Marketing_Dashboards
-rw-r--r-- 1 kbowden staff 180 21 Jul 14:27 Sales_Marketing_Dashboards-meta.xml

dashboards//BG_Dashboard:
total 48
-rw-r--r-- 1 kbowden staff 2868 21 Jul 14:27 BrightMedia.dashboard
-rw-r--r-- 1 kbowden staff 6917 21 Jul 14:27 BrightMedia_digital_dashboard.dashboard


dashboards//Best_Practice_Service_Dashboards:
total 88
-rw-r--r-- 1 kbowden staff 8677 21 Jul 14:27 Service_KPIs.dashboard

dashboards//Sales_Marketing_Dashboards:
total 96
-rw-r--r-- 1 kbowden staff 10317 21 Jul 14:27 Sales_Manager_Dashboard.dashboard
-rw-r--r-- 1 kbowden staff 8046 21 Jul 14:27 Salesperson_Dashboard.dashboard

One more thing

I also fixed the export of sharing rules, so rather than specifying SharingRules as the metadata type, it specifies the three subtypes of sharing rule (SharingCriteriaRule, SharingOwnerRule, SharingTerritoryRule) required to actually export them!
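
In package.xml terms that means entries of this form for each subtype, rather than a single SharingRules entry (a sketch using the wildcard member):

<types>
    <members>*</members>
    <name>SharingCriteriaRule</name>
</types>
<types>
    <members>*</members>
    <name>SharingOwnerRule</name>
</types>
<types>
    <members>*</members>
    <name>SharingTerritoryRule</name>
</types>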
