Complete Developer Introduction to Salesforce Marketing Cloud

In this guide, I will summarize (in some detail) the high-level features available to you as a seasoned developer on the Salesforce Marketing Cloud platform. I will review the basics of the platform, some advanced platform features, platform-specific programming languages, integration support, API support, and platform SDKs, and I’ll also discuss next steps you may wish to take in terms of Salesforce certifications. The Salesforce Marketing Cloud platform is an enterprise-grade solution, well suited for much more than basic email subscriptions. After reading this guide you’ll have a good understanding of what is possible on the platform, how you can launch, iterate on, and transform the marketing efforts at your business, and what is expected from your dev team to support those efforts.

On June 4th, 2013, Salesforce signed a definitive agreement to acquire the company ExactTarget. With this acquisition, Salesforce also acquired the Pardot brand. If you are a current Marketing Cloud user, you may even have noticed some of the URLs still use the exacttarget.com domain. It’s important to understand some of the key differences between Marketing Cloud and Pardot. They both fall under the larger Marketing Cloud umbrella, but the ExactTarget solution is better suited to transactional B2C marketing (think tickets, hotel reservations, e-commerce), while the Pardot solution is better suited to B2B marketing (think sales alignment, lead management, automated nurture campaigns). These are not exclusive requirements, but they are generally true use case scenarios.

This guide will focus primarily on the Salesforce Marketing Cloud (ExactTarget) product and the features around it. Pardot is also an excellent product, and could even be a good fit for marketing teams that are less mature in terms of marketing automation capabilities and want to dip their toes into a simpler product to get started before migrating to Marketing Cloud.

Platform Basics

The key capabilities of the Marketing Cloud platform are multi-channel marketing (email, mobile, social, advertising, interactions), campaign and journey lifecycle management, and contact management that builds a full picture of your customer across all the marketing touchpoints your business engages on (including those outside of the digital space, like ATMs, kiosks, in-store purchases, etc).

Marketing Cloud maintains several ‘Studio’ products: Email Studio, Audience Studio, Mobile Studio, Social Studio, Advertising Studio, and Interaction Studio. Each product has its own user interface specific to that marketing channel, but all of the solutions are connected to each other. Licensing is based on several factors, including the number of contacts, the number of business units (unique subdivisions of your business), license tier, features, and so on. Licensing is beyond the scope of this article, so I’d advise you to work with your Salesforce rep to price out a solution based on your business needs.

In this article, I’m going to focus on a number of features specific to Marketing Cloud in general, and to Email Studio specifically, since that is the most common use case. Marketing Cloud will manage all of your contacts, the data related to those contacts for manual and intelligent segmentation, and the engagement of your marketing contacts across all of those channels. Think of Marketing Cloud as the hub for all of your marketing efforts across every channel, sharing and managing that data across every touchpoint.

Advanced Platform Features

Once you move beyond the contact, campaign, and data management features of Marketing Cloud that cover the core behaviors you would expect in a digital marketing solution, you will find additional capabilities that separate Marketing Cloud from the competition. As a comparison, there are many great CRM solutions for basic contact, account, and lead management, and Sales Cloud can cover those cases as well; however, if that is the extent of your needs, it may not be the most economical solution for you.

Enhanced Marketing Cloud features include support for on-platform programming languages (more below), enterprise-grade API support (more below), advanced automations (including SQL queries and automated file imports/exports), complex journeys (A/B testing, personalization), data extensions (basically table data), AI/ML-based sends with Einstein, and more.

The final advanced feature to focus on is the idea that Marketing Cloud has you covered across all of your marketing channels. You won’t need Hootsuite for social, Power BI for reporting, Twilio for mobile messaging, and other tools connected to your CRM with an iPaaS solution like Jitterbit holding those disparate business systems together. Teams often end up with that patchwork outside of Salesforce Marketing Cloud because, while other CRMs may offer channel management for these other touchpoints, they do not offer the same competitive features and performance that Salesforce does.

Platform Programming Languages

A key feature of the Salesforce platform is the ability to personalize email, SMS messages, and push notifications with dynamic information that is rendered at the point the message is sent. There are three distinct languages that can be used on Marketing Cloud.

  • Server-Side JavaScript (SSJS): This is the legacy programming syntax used by Marketing Cloud; the code executes on the server when the messages are sent. The SSJS syntax is closer to the traditional JavaScript you may already be used to, and it has better features for error handling.
  • AMPScript: This is the most commonly used language on Marketing Cloud today and may remind you of ASP/PHP code mixed with markup. AMPScript can change messages dynamically for the subscriber they are being sent to, pull in data extension data (like e-commerce order details or suggested items), make HTTP callouts, and perform a variety of other dynamic processing for message sends.
  • Guided Template Language (GTL): Based on the Handlebars templating language, GTL is useful for simpler scenarios. As a developer, I would concentrate your efforts on AMPScript, but there are use cases for GTL as well. This is the newest offering from Salesforce for messaging personalization.

One additional language worth mentioning, though not used to tailor messages, is SQL. Marketing Cloud encourages the use of SQL queries for reporting and for creating new filtered subsets of existing data extensions (tables of data) based on specific criteria. This can be useful for segmenting your marketing efforts around more complex criteria.

Integrations

Once you’ve acclimated yourself to Marketing Cloud and all that it has to offer on the platform, you may also wish to understand how to connect Marketing Cloud to your outside business systems, like your CMS/CRM/ERP/CDP/DAM or other three-letter business system acronym (technology is full of these, isn’t it?). Thankfully, Marketing Cloud has two robust API solutions, SOAP and REST, that cover the majority of scenarios.

The SOAP-based API will enable you to perform core marketing behaviors: subscriber management, email sends (both batch and transactional/triggered), engagement tracking, and automation management. If your needs center on core email marketing behaviors, the SOAP API may be the solution you’re looking for.

For all other scenarios, the REST API will cover your needs for managing content (templates, images, reusable blocks), event notifications (password resets, order confirmations), Journey Builder management (enrolling users), GroupConnect chat (i.e. Facebook Messenger), MobileConnect (i.e. sending SMS), Personalization Builder (Einstein-based recommendations), and transactional/triggered sends.

As you can see, most of the operations that can be performed in Marketing Cloud on the platform can also be performed via the SOAP or REST APIs. Even the more complex business cases can be handled through integration. This is particularly useful if, for example, you wanted to connect your Marketing Cloud solution to your e-commerce website and enroll new users in a journey when they complete a purchase, send them transactional messages like in-stock notifications, send them SMS messages for shipping alerts, keep in touch with them via Facebook for support on their orders, and more.
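
To make this concrete, below is a rough C# sketch (raw HTTP calls, not an official SDK) of what enrolling a purchaser into a journey via the REST API could look like. The subdomain, client id/secret, and the ECommercePurchase event definition key are all placeholders you would swap for values from your own installed package and journey.

// Hedged sketch: authenticate with the v2 token endpoint, then fire a Journey Builder entry event.
// YOUR_SUBDOMAIN, the client credentials, and the EventDefinitionKey are hypothetical placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class JourneyEnrollmentSketch
{
    public static async Task EnrollPurchaserAsync(string contactKey, string orderNumber)
    {
        using var http = new HttpClient();

        // 1. Request an OAuth token (client credentials flow).
        var tokenBody = JsonSerializer.Serialize(new
        {
            grant_type = "client_credentials",
            client_id = "YOUR_CLIENT_ID",
            client_secret = "YOUR_CLIENT_SECRET"
        });
        var tokenResponse = await http.PostAsync(
            "https://YOUR_SUBDOMAIN.auth.marketingcloudapis.com/v2/token",
            new StringContent(tokenBody, Encoding.UTF8, "application/json"));
        using var tokenJson = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        var accessToken = tokenJson.RootElement.GetProperty("access_token").GetString();

        // 2. Fire the journey entry event for the contact who just completed a purchase.
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        var eventBody = JsonSerializer.Serialize(new
        {
            ContactKey = contactKey,
            EventDefinitionKey = "ECommercePurchase", // hypothetical entry event key
            Data = new { OrderNumber = orderNumber }
        });
        var eventResponse = await http.PostAsync(
            "https://YOUR_SUBDOMAIN.rest.marketingcloudapis.com/interaction/v1/events",
            new StringContent(eventBody, Encoding.UTF8, "application/json"));
        Console.WriteLine(eventResponse.StatusCode);
    }
}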

To simplify some of the API connectivity, there are also platform SDKs for the major programming languages (.NET, PHP, Ruby, Node, Java). These allow you to plug into the APIs without having to know the specific API surfaces, handle the authentication, security, and batching that go along with the integration, and work in a language you are already familiar with.

Thankfully, if you are looking to connect your Marketing Cloud instance to other software solutions such as Sales Cloud, you will benefit from Marketing Cloud Connect. The Marketing Cloud Connect solution bridges your standard and custom objects and syndicates that data over to Marketing Cloud as data extensions. It can also plug into your Journey Builder schema to create tasks and leads, update contacts, and so on, based on the step the user is currently engaged in during your defined journey. It really allows for a cohesive solution between Sales and Marketing Clouds.

Additionally, there are third party solutions that can plug into Marketing Cloud for specific requirements. Examples of this may be FormAssembly for any forms you want to collect information from, Clutch Loyalty for any loyalty program management, and CMS connectors like those from Sitecore for sharing data/information between your CMS and your Marketing Cloud instance.

Lastly, as you grow your business you may consider the benefit of an iPaaS solution like Jitterbit or Mulesoft, which act as an integration hub for all of your business systems, allowing you as the CIO/CTO to easily swap in and out business software without having downstream effects on all of the other business systems in your software ecosystem.

Certifications

Now that you have a detailed overview of the Salesforce Marketing Cloud solution as a whole, you may be curious about how to continue building on your newfound knowledge. One way to keep skilling up on the solution is to work towards one of the various certifications that exist for the platform.

  • Marketing Cloud Email Specialist: The email specialist exam is a general-purpose exam that covers the core components of the Marketing Cloud system. It is also a prerequisite for the Developer and Consultant exams.
  • Marketing Cloud Administrator: The administrator exam is useful for anyone who plans to be responsible for the maintenance and general system-level configuration of the Marketing Cloud solution, and it is similar in nature to the Administrator exam offered for the core Salesforce Platform. There is no prerequisite for this exam.
  • Marketing Cloud Developer: If you see yourself in a developer role, writing AMPScript, connecting to the APIs, authenticating the system to external systems, configuring the data model (basically everything we covered in this post), then the developer exam would be a good one for you to pursue. You will also need to complete the email specialist exam as a prerequisite.
  • Marketing Cloud Consultant: If you see yourself in a customer-facing role, helping your client (or your own business, as its representative) decide how to utilize the Marketing Cloud system, translating requirements into language developers can work from, and maintaining a detailed understanding of the features available, then the consultant exam would be a good fit for you. You will also need to complete the email specialist exam as a prerequisite.

Additional Resources

There are a number of resources available for anyone looking to learn more about Salesforce Marketing Cloud. The official Marketing Cloud developer documentation and the Marketing Cloud modules on Trailhead are good starting points, and each links to many more resources worth considering as well.

Summary

In closing, if you’ve made it this far, you should have a relatively detailed understanding of the high-level components of Salesforce Marketing Cloud and how you would fit in as a developer or architect working on the platform. There is an exceptional amount of power and tooling to support the modern marketer in this solution, and a lot of room for a developer to connect those systems together and support the marketing team in their efforts. I hope this post gave you some inspiration and ideas for next steps to pursue as you grow your understanding of what the solution can do. Cheers.

 

Serverless Salesforce Messaging with Azure Functions

Introduction

The serverless computing paradigm allows developers to quickly spin up, prototype, and build out applications using inexpensive usage-based resources. In this article, we’re going to go over how to connect your Salesforce Outbound Messages to Azure Functions for processing. As of this writing, as far as I’m aware, there are no governor limits on Outbound Messages, and Azure Functions are priced at $0.20 per million executions with 1 million executions in the free grant per month. If you’re not already comfortable working with Outbound Messages in .NET, I’ve got you covered with a post I did previously on this topic.

Pre-requisites

Before proceeding, make sure you have the Azure Functions Core Tools set up on your machine; the guide that walks through the install steps for Windows, macOS, and Linux is at the top of the Core Tools documentation. These are the command-line tools you’ll use to run your Azure Functions code. You’ll also want to ensure you have the Azure Functions extension for VS Code installed.

Initialize Your Azure Function

After ensuring you have the Azure Functions tooling set up and configured properly, create and navigate to an empty directory on your machine and run “code .” to launch VS Code from the current directory. Open the command palette (Ctrl+Shift+P on Windows) and run the “Azure Functions: Create Function…” command. For the language, select your favorite (I’m using JavaScript), and select HttpTrigger as the trigger type.

Azure Function Code Sample

Update your index.js file (the one that holds your trigger code) to the example below:

let xmlParser = require("xml2json");

module.exports = async function(context, req) {
  let sfData = null;

  // convert incoming raw soap xml message to json.
  try {
    sfData = xmlParser.toJson(req.body, { object: true });
  } catch (ex) {
    context.log(ex); // use the function context logger so the error shows up in the function logs
    context.res = {
      status: 500,
      body:
        "An error occurred trying to parse the incoming xml.  Check the log."
    };
    return; // stop here so we don't dereference a null payload below
  }

  // for diagnostics output the actual message
  context.log(JSON.stringify(sfData["soapenv:Envelope"]["soapenv:Body"]));

  // Send the acknowledgement response back to Salesforce.
  var resXml =
    '<?xml version="1.0" encoding="UTF-8"?>' +
    ' <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
    "  <soapenv:Body>" +
    '    <ns3:notificationResponse xmlns:ns3="http://soap.sforce.com/2005/09/outbound">' +
    "      <ns3:Ack>true</ns3:Ack>" +
    "    </ns3:notificationResponse>" +
    "  </soapenv:Body>" +
    " </soapenv:Envelope>";

  context.res = {
    status: 200,
    body: resXml,
    headers: { "content-type": "application/xml" } // be sure to denote content type
  };
};

The code above can be broken into three separate components. First, we parse the raw incoming XML (Salesforce uses SOAP messaging for its outbound messages); you’ll need to npm install the xml2json package for this. Second, we log the content of the SOAP message, which contains the actual message contents, the object’s id, the organization’s id, and some other details. Finally, we send the acknowledgement XML back to Salesforce so it knows we have confirmed receipt of the incoming message. Salesforce will keep attempting to resend the message for up to 24 hours if it does not receive an acknowledgement.

Running Azure Functions Locally

To test our code locally, we are going to open the terminal window in VS Code (or open a command prompt and navigate to the project directory) and run two commands. The first command is func start, which will begin running our function code and listening for requests (likely on localhost:7071). Using the tool ngrok, we’ll then map port 7071 on our machine to a public-facing URL and update our Outbound Message definition so that it points to this URL. The specific command we will run is ngrok http 7071, and we’ll then see a screen like this:

From here, I know that I need to specify that my Outbound Message definition points to https://7655c914.ngrok.io/api/SalesforceMessageHandler [SalesforceMessageHandler is what I named my Azure Function when I created it].

Now, once we have triggered our Outbound Message, we will see in the output window for our func start command that our message has been received and we are properly logging the output to the console. You could take this output and process it immediately, but my recommendation would be to log the message somewhere (Azure SQL, Azure Service Bus, etc.) and then process it on a separate thread. My rationale is that this makes your application more durable and allows you to recover more easily from errors, since Salesforce only retries for 24 hours and you may lose messages you were unable to capture in that window.

Conclusion

Wrapping up, we’ve gone over one way you can use Azure to integrate with Salesforce. I plan to have some additional posts in the future covering Azure Logic Apps and other services that can be leveraged in tandem with Salesforce. I’ll also add that Salesforce has recently introduced Salesforce Evergreen as a way to handle serverless code execution directly on the platform, so you’ll have plenty of options in the future.

 

Developer Introduction to Salesforce CMS

Introduction

CMS stands for Content Management System. The most well-known CMS today is WordPress. Common components of a CMS are content administration features (authoring tools for content like blogs, news, events, white papers, etc) in addition to content delivery capabilities that render the content for consumption, such as on a web page or over an API.

How Does Salesforce CMS Work?

Below is a Salesforce-produced marketing video (about two minutes in length) that does a good job highlighting the general capabilities of the CMS. One thing you’ll note is that Salesforce CMS can operate in a headless manner, meaning content can be served across many channels while being consolidated in one location. You can think of this like a bicycle wheel’s hub and spokes, where Salesforce is the hub for all of your content and the various channels are the spokes (commerce, blogs, print material, emails, and so on). It does this by serving the content through an API, or via pre-built connectors for Commerce Cloud, Marketing Cloud, or Communities. You can also serve content through social media channels. The value proposition Salesforce is offering its customers is that you can author and maintain all of your content inside Salesforce, then serve that content anywhere, inside or outside Salesforce.

Core Salesforce CMS Capabilities

There are several features common to other CMS systems (such as WordPress, Sitecore, Kentico, Sitefinity) that are also present in Salesforce CMS. The advantage for you, the developer, is that you are able to leverage familiar Salesforce developer tooling (Salesforce DX, Lightning Web Components, the Salesforce API) to deliver and present that content to your customers as another touchpoint, enhancing the 360-degree view of your customers and providing them with modern experiences.

A high-level breakdown of features/functionality follows:

  • Content Workspaces: Content is able to be organized into ‘workspaces’ that may be unique per marketing campaign, per calendar year, or any other logical grouping that makes sense.
  • Content Workflows: Your team is able to assign content managers and content authors, each with their own permissions, and to create draft and published versions of content.
  • Multilingual Content: In addition to delivering content in your organization’s native language, you are able to denote additional languages that content should be translated into, and the system provides a translation user interface to support that.
  • Omnichannel Delivery: You are able to denote which channels (Marketing, Commerce, Communities) you wish the workspace content to appear in.
  • Custom Content Types: Currently this isn’t available via the UI, but I will highlight below how to create your own content types using the Tooling API.
  • Experience Builder Components: More information below, but there are two new Experience Builder components called CMS Single Item and CMS Collection that can be used to render content on the screen.

I expect this list of features to grow with each future release, and I plan to have several future posts regarding Salesforce CMS capabilities as well.

Experience Builder

For those of us with prior experience building Salesforce Communities, you may have used the Community Builder feature. This feature is now referred to as ‘Experience Builder’ and I believe it will continue to be the basis for developing content-rich, customer-facing experiences in the future. It uses the same Salesforce tooling we already know and appreciate (DX, LWC, the API). Combined with new Spring ’20 features such as Flexible Layouts, which allow mobile-friendly page layout design, and Experience Bundle, which provides human-readable deployment options for community experiences, it is paving the way for Salesforce CMS to become a great option for companies evaluating a full dot-com, content-rich solution. It’s still early days for Salesforce CMS, but I do anticipate it will continue to grow and add even more capability over time.

Creating Custom Content Types

One challenge I ran across when first engaging with Salesforce CMS was how to create custom content types. Custom content types (News, Blogs) are defined in Salesforce as the object type ManagedContentType, which derives from the Metadata type. Using the Salesforce Tooling API, it is possible to create new content types. The default only shows News, and you aren’t (currently) prompted to specify a content type when creating a new content record. A full write-up on how to create new content types is on the Salesforce Developer Blog; I would re-create it here, but it would be redundant.

One additional note: setting up Salesforce CMS primarily involves enabling Communities, assigning some profile permissions, and a few other minor steps.

Looking Ahead

The CMS landscape is full of very innovative and mature technology companies, such as WordPress, Sitecore, Adobe Experience Manager, Contentful, Contentstack, and a host of other solutions. The advantage, in my opinion, of using Salesforce CMS is that you as a developer are able to use existing platform knowledge to deliver this feature for your business. Additionally, while the other platforms noted above let you surface content outside of Salesforce, with Salesforce CMS you are also able to surface your content across your various Salesforce channels, such as Marketing Cloud, Commerce Cloud, and so on, so there is truly a unified experience across all of your channels. I admire the number of features and capabilities Salesforce continues to deliver three times a year, and fully expect Salesforce CMS to continue to grow and mature with each release.

More information about Salesforce CMS is available here.

 

Working with Salesforce Outbound Messages in .NET

What are Outbound Messages?

Outbound Messages in Salesforce are another way for your application to receive notifications of changes to your Salesforce data. Imagine a scenario where you would like your external application to be notified whenever a Lead record in Salesforce is created (or edited) and the City field on the Lead is set to Chicago. This is possible in Salesforce using an Outbound Message triggered from a Workflow Rule like the one I just described. I’ll discuss both of those Salesforce features in detail in this post.

We have already discussed the Streaming API here in the past, but there are a few key differences to observe with Outbound Messages:

  • There are no API limitations around the number of Outbound Messages which can be sent. The Streaming API has a set number of events which are allocated to you.
  • Outbound Messages require you to expose a public URL which receives the notifications, and there are some security considerations to be aware of. The Streaming API does not require a public facing URL and can run behind a firewall.
  • Workflow Rules allow a finer set of criteria based on field filters, or a formula field, which can give you more control over which notifications your application receives.
  • There is no ‘Replay’ behavior in Outbound Messages and there is no guarantee that notifications will come in the order they occurred. You will receive an Id for each notification, but there is no way to replay notifications. Similarly, if a record is changed before your message is sent, it may only contain the latest data from the most recent change.

Important note: be cognizant of creating circular/infinite loops with Outbound Messages. If you create a Workflow Rule that sends a message every time an account is changed, and that message handler updates the opportunity, which fires a rule that updates the account, you may find yourself in a difficult-to-troubleshoot scenario. To circumvent this problem, create a Profile in Salesforce for the user that receives outbound notifications and set the “Send Outbound Messages” field to off on that profile. This way the user can receive messages, but updates made by that user will not trigger additional cascading notifications.

Defining an Outbound Message

Creating our Outbound Message in Salesforce is a two-step process. The first step is to define the actual message; think of this as defining the data model for the process. Later, when we discuss the Workflow Rule, that will be the controller aspect that determines when our messages are sent. Head over to the Setup menu inside your sandbox/developer org and use the Quick Find box to locate “Process Automation > Workflow Actions > Outbound Messages” (1). From the list of current outbound messages, select “New Outbound Message”. On the first screen, indicate that we are creating a message for the Lead object.

On the next screen, specify a name (2) for the message that is representative of what is being delivered in each notification. Also provide a generic Endpoint URL (3); we are going to change this later, so just set it to a valid URL for now. Enable the Session ID (4) so we get a parameter that indicates who triggered the message being sent. Finally, specify the fields you’d like to receive in your payload (5) when the notification is delivered. As a suggestion, try to limit this to just the fields you need; this reduces the transmission time over the wire and the time to deserialize the messages.

Once the message has been saved/created, the final step is to capture the Endpoint WSDL value. Click the link that says “Click for WSDL” and save the resulting contents on your workstation with the name “leads.wsdl”.

Creating the Workflow Rule

The next step in our two-step process is to create a Workflow Rule which will trigger our Outbound Message to actually be delivered. In Setup, enter “Workflow Rule” in the Quick Find and select “Process Automation > Workflow Rules”. On the screen that lists your current workflow rules, select the “New Rule” button and designate that we are creating a rule that works on the Lead object.

On the next screen, provide an informative name for your rule. In your evaluation criteria you have three options: when records are created and meet the criteria, when they are created or edited and meet the criteria, or when the record changes from its original state to meet the criteria (i.e. the city was Phoenix, but it is now Chicago and the criteria is city = Chicago). We’ll select the second option (3), which runs every time our criteria is met, even if the criteria didn’t change.

For the rule criteria, we’re given a set of contextual parameters (based on the object type we’re working with) we can use to determine if our rule should be executed. We could switch this to a formula also, but that is outside the scope of this tutorial. For now, let’s stick with Lead: City equals Chicago (4). On the next screen, select the “Add Workflow Action > Select Existing Action” option just below the block for Immediate Workflow Actions. From there, indicate the action type is “Outbound Message” and pick the message we created in the Outbound Message step above. Click “Save” and we are done with the Workflow Rule.

Create the Listener Service

We’ve reached the development portion of the tutorial. Because .NET Core does not support the server/hosting side of SOAP services, we’re going to use a .NET 4.6.1 project to complete our integration. The first step is to ensure your wsdl.exe command is in the path of your workstation. If the command “wsdl.exe -?” does not produce a valid response, you will need to track this down.

Once we have everything ready, let’s run the command below. The leads.wsdl should have been captured from the first step where we set up the outbound message:

wsdl.exe /serverInterface “c:\path\to\your\leads.wsdl” /out:NotificationServiceInterface.cs

This command will generate the server/hosting side contracts (/serverInterface) for the WSDL service since Salesforce is sending the message to us. Make a note of where your NotificationServiceInterface.cs file is located because we’ll be adding this to our project in a moment. The next step is to create a new “ASP.NET Web Application (.NET Framework)” to indicate we want the full version of .NET. You can create this with the “Empty” template type since we aren’t doing anything else here other than just prototyping something.

Once this is done, add a new folder called Infrastructure, and add your previously generated NotificationServiceInterface.cs file to it. You may need or want to update the namespace in that file to ensure it matches your solution. The last step is to “Add New Item” in the Infrastructure folder we created, specify the type as “Web Service (ASMX)”, and name the file MyNotificationListener.asmx. The contents of that file should be as follows:

using System.Web.Services;

namespace Salesforce.OutboundMessageExample.Web.Infrastructure
{
    /// <summary>
    /// Summary description for MyNotificationListener
    /// </summary>
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [System.ComponentModel.ToolboxItem(false)]
    // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. 
    // [System.Web.Script.Services.ScriptService]
    public class MyNotificationListener : INotificationBinding
    {
        public notificationsResponse notifications(notifications n)
        {
            // Salesforce delivers up to 100 notifications per call; inspect/process them here.
            // Returning Ack = true confirms receipt so Salesforce stops retrying this batch.
            notificationsResponse r = new notificationsResponse();
            r.Ack = true;
            return r;
        }
    }
}

Your solution should look like this when completed.

Important Security Considerations

There are some important security changes you should apply when running this application in production, and this is a good point to discuss some of them:

  • Ensure that your service is white-listed to only allow Salesforce IP addresses.
  • Use TLS/SSL endpoints for your services to ensure there are no parties which could intercept your sensitive Salesforce data.
  • An OrganizationId parameter is included in each message. Validate it on every incoming message against a locally stored value for your organization (a sketch follows this list).
  • If your application allows (or requires) it, download the Salesforce Client Certificate to validate the message is indeed coming from Salesforce.
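
As a sketch of the OrganizationId check above, and assuming the WSDL-generated notifications class exposes an OrganizationId property (verify the exact member name in your generated NotificationServiceInterface.cs), the listener method could refuse to acknowledge messages from unknown orgs:

        public notificationsResponse notifications(notifications n)
        {
            notificationsResponse r = new notificationsResponse();

            // Hypothetical check: compare the message's OrganizationId to a value stored in
            // web.config appSettings (requires a reference to System.Configuration).
            string expectedOrgId = System.Configuration.ConfigurationManager.AppSettings["SalesforceOrgId"];
            if (!string.Equals(n.OrganizationId, expectedOrgId, System.StringComparison.OrdinalIgnoreCase))
            {
                r.Ack = false; // not acknowledged; Salesforce will retry, and we have not processed the data
                return r;
            }

            r.Ack = true;
            return r;
        }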

Testing Locally with Ngrok

Because Salesforce needs to send a message to your application, and localhost is not a publicly accessible endpoint, we need to provide Salesforce with an address it can use to post messages to us. This is where ngrok comes in. Ngrok allows you to create a public URL which will tunnel to your local instance of your application. In the first step, when creating the Outbound Message, we just provided a dummy URL; we’re going to update that now to a valid one.

Start debugging your application, and make a note of the port. On my machine the address is: http://localhost:55475/Infrastructure/MyNotificationListener.asmx. Armed with this information, and a downloaded copy of ngrok which is in my system %path% variable, I will run the command: ngrok http 55475 -host-header="localhost:55475". You can replace 55475 with the port for your project.

You can then see we’re given an http/https URL to use (1) and also a web proxy we can use to monitor requests/responses (2). If I were to request https://6a920c28.ngrok.io/Infrastructure/MyNotificationListener.asmx the result should look the same as if I had requested the page on localhost:55475. Armed with this information, head back over to the Outbound Message you created in the first step, and update the Endpoint URL parameter with this new public URL you have just created.

Firing the Workflow Rule

Ok, we’re pretty much all set at this point. The last item we need to do is head to the Workflow Rule we created in the second step and click “Activate” on the rule. If we don’t activate the Workflow Rule, we won’t get any messages delivered to our endpoint. Ok! Are you ready? Now comes the exciting part. Head over to the “Leads” section of Salesforce and edit one of the leads which already exists there (or create a new one). Set any values you like; the only important thing is to ensure that the “City” field is set to “Chicago” per our Workflow Rule. Once you save the record, your breakpoint should be hit and you can debug the incoming object to see all of the information you are being provided about the record. Boom!

Viewing Outbound Message Status

Having any trouble? Head over to the Setup location “Environments > Monitoring > Outbound Messages”. Here you will see a snapshot of messages which are queued for delivery as well as any recent failures, along with a helpful description. In my case, you can see a 404 Not Found was returned because I didn’t have ngrok running.

Summary

Outbound Messaging is a great way to receive notifications of changes in your Salesforce platform based on specific criteria. You can be as specific or generic as you’d like. Salesforce will deliver up to 100 notifications at a time to your application. It will also retry those messages for up to 24 hours, gradually increasing the gap between retries with each failure, up to 2 hours between attempts. I hope you enjoy this feature and are able to utilize it to keep your applications in sync with Salesforce.

 

Getting Started with the Salesforce Streaming API in .NET Core

Introduction

Have you ever wished you could get a notification whenever a record type or subset of records changed in Salesforce? You may wish to be notified of those changes in order to replicate data or to trigger behavior in an external business system.

Enter the Streaming API from Salesforce. This feature consists of several subfeatures which enable you to do just that. Opting in to certain changes is possible with Push Topics. Getting a complete list of every change is possible with Change Data Capture. We’re going to talk about both in this post, and how you can integrate your .NET applications with Salesforce to capture those changes in near real-time.

If you want to see a full breakdown of the Streaming API components and capabilities, you can do so here. Also, please be sure your account meets the criteria for the Streaming API. If you have a developer sandbox from https://developer.salesforce.com, you should already be set.

Finally, I recommend you download the Postman collection I created for Salesforce here, and also the sample .NET Core application I created for connecting to the Streaming API.

CometD, Bayeux Protocol, and Long Polling

The Streaming API is possible thanks to a clustered web messaging technology known as CometD. This event messaging system is built on top of the Bayeux protocol, which transmits messages over HTTP. When you connect your .NET Core application to the Salesforce Streaming API, you are not repeatedly polling for new messages. You are actually establishing an open (long-polling) connection and waiting/listening for changes to be pushed out to you.

Below is a diagram that shows the various connections that are made when establishing a connection to the Streaming API. The client first performs a handshake to establish a long polling connection. If successful, it sends through a subscription to the channel you want to subscribe to (more on this below). A channel represents which type of event notifications you wish to receive. Once the channel subscription is open, the long-polling sequence takes over and you maintain this connection listening for changes.
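
To make that sequence concrete, here is a rough C# sketch of just the first two Bayeux messages (handshake, then subscribe) posted to an org’s CometD endpoint. It assumes you already have an OAuth access token and instance URL, uses an example API version in the path, and omits the /meta/connect long-polling loop and error handling that the sample project takes care of.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class StreamingHandshakeSketch
{
    public static async Task HandshakeAndSubscribeAsync(string instanceUrl, string accessToken)
    {
        var cometdUrl = $"{instanceUrl}/cometd/45.0"; // the CometD endpoint is versioned by API version
        using var http = new HttpClient();
        // The documented Streaming API examples pass the token using the OAuth scheme.
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("OAuth", accessToken);

        // 1. Handshake: negotiate the Bayeux connection and receive a clientId in the response.
        var handshake = "[{\"channel\":\"/meta/handshake\",\"version\":\"1.0\"," +
                        "\"supportedConnectionTypes\":[\"long-polling\"]}]";
        var handshakeResponse = await http.PostAsync(cometdUrl,
            new StringContent(handshake, Encoding.UTF8, "application/json"));
        Console.WriteLine(await handshakeResponse.Content.ReadAsStringAsync());

        // 2. Subscribe: ask for Change Data Capture events on /data/ChangeEvents.
        // A real client parses the clientId out of the handshake response first.
        var clientId = "CLIENT_ID_FROM_HANDSHAKE"; // placeholder
        var subscribe = "[{\"channel\":\"/meta/subscribe\",\"clientId\":\"" + clientId + "\"," +
                        "\"subscription\":\"/data/ChangeEvents\"}]";
        var subscribeResponse = await http.PostAsync(cometdUrl,
            new StringContent(subscribe, Encoding.UTF8, "application/json"));
        Console.WriteLine(await subscribeResponse.Content.ReadAsStringAsync());

        // 3. From here a real client loops on /meta/connect to long-poll for pushed events.
    }
}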

One advantage the Streaming API has over the Outbound Messages feature from Salesforce, is that you can run this from behind a firewall on your network without exposing an HTTP endpoint for Salesforce to push records to. You can read more about CometD, Bayeux Protocol, and Long Polling on the Salesforce documentation site.

Streaming API Limits

Just like with our previous posts on the REST API and on the Bulk API, Salesforce enforces limits around how many events can be captured per 24 hour period with the Streaming API. The limits are in place because Salesforce is a multi-tenant application, and resources are shared among customers. Constraints are considered beneficial because they promote good software design, and when you are developing your applications to connect to the Streaming API, you should be cognizant of the limits in place (which may vary from environment to environment, or customer to customer).

What is Change Data Capture?

Change Data Capture, which is set to formally release in Spring ’19 (tentatively Feb. 8, 2019), is currently in developer preview.

Below is an example payload you would receive when subscribed to a Change Data Capture channel and one of those entity types changed:


A few points in the payload JSON that gets sent to us are worth calling out briefly. The first is the entity type (1) which was changed. Change Data Capture allows you to subscribe to certain channels, such as /data/AccountChangeEvent for just Account changes, or, in the case of the payload above, all changes via /data/ChangeEvents (5). More on subscription channels for Change Data Capture is available here.

The next piece of information we get is the changeType (2) which outlines what type of operation happened. We’re also given just the data which changed (3) so we can limit the size of our payload and know what exactly needs to be updated.

Finally, we’re given a replayId (4) which can be used as a sort of audit trail for this change. Salesforce supports the notion of durable messages, which means that these events are saved on the platform and can be ‘replayed’ for a period of time (1–3 days). When you establish your ‘subscription’ in the connection diagram above, you can also provide a replay value. A value of -1 (the default) means you want just new messages on this channel. A value of -2 means you want everything available (be aware of your API limits and the processing time required for this). A value of X (a prior replayId) means you want everything after a certain event you had captured (assuming it falls in the availability window).

I won’t be discussing replays in this post, partly because I’m still working on updating my sample code to support them, but if you want to learn more about them you can see a nice write-up here.

Setting up Change Data Capture Events

Inside your development org, head over to Settings > Integrations > Change Data Capture. Here you will see a screen that looks similar to the screenshot below. Listed are two columns: available entities to which you can subscribe, and selected entities to which you have already subscribed. Select a few examples, such as Lead, Account, and Contact. You can also subscribe to changes to custom objects you have created.

For each standard entity you selected, you will be able to retrieve those events via the /data/<Type>ChangeEvent channel. Custom objects are available via a /data/<CustomObject>__ChangeEvent channel. If you want to subscribe to all changes for all of your selected entities, the /data/ChangeEvents channel will allow you to do that.

Subscribing to Change Data Capture Events

I’ve created a sample project which connects to the Salesforce Streaming API and published it on GitHub, which you can use as a starting point to begin receiving notifications. You’ll need to be sure to update the appsettings.json file with your relevant values. As an important aside, please be sure to use user secrets for any development configuration, and environment variables or Azure Key Vault for any production configuration, to ensure you are safely maintaining credentials for your orgs.

Once you’ve updated your settings, you can run the application. I’d encourage you to step through the code as well: it first establishes an OAuth token with your credentials, then performs a subscription to the channel you provided, which I’ve defaulted to /data/ChangeEvents to see all changes. If you have trouble connecting, be sure you can connect to the REST API using Postman and then try again.

As you can see in the screenshot above, we’ve managed to successfully connect to our org (1), perform a handshake to establish a long-polling connection (2), and designate our channel subscription (3, 4). Once the application began waiting for changes, we updated a Lead record and changed the company field to “American Banking Corp2” which you see outlined in our change data capture payload (5).

That about wraps it up for Change Data Capture. There is certainly a lot more to explore, but if you are looking to replicate every change to certain record types in your org for replication or other purposes, you can obtain that information using this feature of the Streaming API. There is also a great Trailhead module on Change Data Capture if you want to apply this badge to your Trailhead account and learn more.

What are PushTopics?

Push Topics are a way to receive more granular notifications based on a SOQL query you define. If you’re not familiar with SOQL, it is very similar to SQL but with a few minor differences around how records are joined. You can see a few examples here. Salesforce analyzes record changes against your SOQL query, and based on criteria outlined below, determines if the record should be pushed out to you.

Another parameter to consider when developing your Push Topic SOQL query is the NotifyForFields value. Your options are All, Referenced (the default), Select, and Where. All means you want all field changes pushed when the record matches the WHERE clause of your query. Referenced means you want only changes to fields in your SELECT and WHERE clauses, for records which match the WHERE clause. Select means you want only changes to fields in your SELECT statement, for records that match your WHERE clause. Where is just like Select, but instead of watching the fields in the SELECT, it monitors changes only to fields in the WHERE clause.

Setting Up PushTopics

Thankfully, it’s possible to create Push Topic records with the REST API. You can download the latest Postman collection, which includes these calls. To create a push topic, we create a new sObject with a name (1), which we will later use to subscribe to our channel at /topic/Chicago_Leads. We also provide a SOQL query (2) which includes Name, FirstName, LastName, and Industry in the fields (Id is always required) and a WHERE clause looking for leads just in Chicago. We also designate a NotifyForFields parameter of Select, which means we’ll get push notifications when the FirstName or LastName change (since they are in the SELECT fields), but only for leads in Chicago.

Once the push topic has been created, we get an Id back (4) which we can use for updating this topic later if needed. One point to mention here: we also included the Name field in our query. Name is a compound field, and Salesforce has some special rules around notifications for compound fields like name and address when creating push topics.
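
For reference, here is a minimal C# sketch of the same PushTopic creation call made outside of Postman. The API version is just an example, and the field values mirror the Chicago leads topic described above; PushTopic is an ordinary sObject, so it goes through the standard sObject endpoint.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class PushTopicSketch
{
    public static async Task CreateChicagoLeadsTopicAsync(string instanceUrl, string accessToken)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var body = JsonSerializer.Serialize(new
        {
            Name = "Chicago_Leads", // becomes the channel /topic/Chicago_Leads
            Query = "SELECT Id, Name, FirstName, LastName, Industry FROM Lead WHERE City = 'Chicago'",
            ApiVersion = 45.0,
            NotifyForFields = "Select", // push when fields in the SELECT change
            NotifyForOperationCreate = true,
            NotifyForOperationUpdate = true
        });

        var response = await http.PostAsync(
            $"{instanceUrl}/services/data/v45.0/sobjects/PushTopic",
            new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // includes the new PushTopic Id
    }
}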

Subscribing to PushTopics

Update the appsettings.json file in our demo application to set the channel to “/topic/Chicago_Leads” and connect to the channel (1). Then, editing a lead in the browser to change the Industry, where that lead has City = ‘Chicago’, will result in a Streaming API event being pushed to us (2).

Another way we can test our Push Topics is to use the Salesforce Workbench utility. After logging in, we navigate to Queries > Streaming Push Topics. Here we can create a new test Push Topic (there is no option to specify the NotifyForFields parameter, so assume Referenced), and we can view the stream of events/messages that are happening.

Salesforce has a nice Trailhead module on Push Topics and the Streaming API if you’d like to learn more about this subject.

Summary

To recap, the Streaming API is a great way to connect an externally hosted application or dataset to Salesforce and keep that system up to date with the changes happening in Salesforce. Whether you push just a certain predefined set of records with Push Topics, or take a fire hose of all the changes with Change Data Capture, the Streaming API enables us as developers to keep systems in sync with each other.

 

Processing Large Amounts of Data in Salesforce with Bulk API 2.0

Running into limits with your Salesforce REST API usage, or simply looking ahead to processing a large amount of data in Salesforce? Because of the limits on how many calls you can make to the REST API per 24-hour period, an alternative approach is needed for large volumes of data. Thankfully, with the Bulk API, you can process up to 100 million records per 24-hour period, which should (hopefully) be more than enough for you.

If you need help authenticating with the Salesforce API and getting up and running, or using Postman to manage requests and collections, I’ve written about both previously. Also, if you would like a way to generate lots of sample data to test with, I’ve written about the Faker JS library, which can help with that too. (Note: Faker is available on other platforms as well: .NET, Ruby, Python, and Java.)

As a final note, you can download the Postman collection I’ve been using for my Salesforce series on Github if you want to follow along that way.

Create the Job

The first step in creating a Bulk API request is to create a job. As part of the job request, you specify the object type, the operation (insert, delete, upsert), and a few additional fields. In this example, we’re using the upsert operation paired with the External_Id__c field. If you don’t have this external id field on your Lead object, see my REST API post where I talk about working with external keys, which is very important when connecting external business systems. More information about the request body is available here.
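
As a hedged sketch of that job-creation request outside of Postman (the API version below is just an example), the call looks roughly like this:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class BulkJobSketch
{
    public static async Task<string> CreateUpsertJobAsync(HttpClient http, string instanceUrl)
    {
        // Create an ingest job that upserts Leads keyed on the External_Id__c field.
        var body = JsonSerializer.Serialize(new Dictionary<string, string>
        {
            ["object"] = "Lead",
            ["operation"] = "upsert",
            ["externalIdFieldName"] = "External_Id__c",
            ["contentType"] = "CSV",
            ["lineEnding"] = "LF"
        });

        var response = await http.PostAsync(
            $"{instanceUrl}/services/data/v45.0/jobs/ingest",
            new StringContent(body, Encoding.UTF8, "application/json"));

        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var jobId = json.RootElement.GetProperty("id").GetString();
        Console.WriteLine($"Created job {jobId} in state {json.RootElement.GetProperty("state").GetString()}");
        return jobId;
    }
}

The HttpClient passed in is assumed to already carry an Authorization: Bearer header from whichever OAuth flow you used to authenticate.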

Once the response is returned to us, we can see we’re given a job id, as well as some additional information about the job, including the job’s state, which is currently set to Open. The Open state means the job is ready to begin accepting data. We also have a Postman “Test” attached to this request, which saves the returned {{job-id}} parameter so it can be used in future requests.

As a bonus, you can also view your bulk data jobs in the Salesforce web administration panel. Just head to Setup > Environments > Jobs > Bulk Data Load Jobs and you can see your currently open jobs as well as any jobs completed in the last 7 days.

Checking the Job Status

A common request when working with bulk jobs is to check the status of the job. We’ll do this now and see the job is still in the ‘Open’ state, ready to accept data, but in the future we may see our job in any of the following states: Open, UploadComplete, InProgress, JobComplete, Aborted, or Failed.

We’re also given the contentUrl parameter here, which is where we want to post our actual CSV data.

Uploading the Data

Once we are armed with an ‘Open’ job, ready to accept data, we’re ready to start sending data to the job. We do this by sending a PUT request to the contentUrl from our job. In the screenshot below you can see we’ve set our Content-Type to text/csv and are sending the CSV data in the request body. The CSV header names map to fields on your job’s object type (Lead in this case). If you were sending something like Contact records, you could also specify a field for Account.External_Id__c (assuming you had an external id set up on Account) to properly link the contact records to the corresponding Account object.

There are also limits to be aware of when sending your CSV data to the job. The current limit is 150 megabytes of base64-encoded content. If your content is not already base64 encoded, consider limiting your size to 100 megabytes, since Salesforce converts your request to base64 upon receipt and this can result in up to a 50% increase in size.
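
Continuing the sketch from the job-creation step (same class and usings), the upload is a single PUT of the raw CSV body to the batches path that the job’s contentUrl points to; the rows below are obviously made-up sample data:

    public static async Task UploadCsvAsync(HttpClient http, string instanceUrl, string jobId)
    {
        // The CSV header names the fields being loaded; Company and LastName are required on Lead.
        var csv = "External_Id__c,FirstName,LastName,Company\n" +
                  "L-1001,Ada,Lovelace,Analytical Engines Inc\n" +
                  "L-1002,Grace,Hopper,Flow-Matic LLC\n";

        var request = new HttpRequestMessage(HttpMethod.Put,
            $"{instanceUrl}/services/data/v45.0/jobs/ingest/{jobId}/batches")
        {
            Content = new StringContent(csv, Encoding.UTF8, "text/csv")
        };

        var response = await http.SendAsync(request);
        Console.WriteLine(response.StatusCode); // 201 Created when the upload is accepted
    }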

Marking the Job Complete

We briefly mentioned job state earlier when talking about job status. Once you have finished sending the batches to your job, you will need to mark your job complete by setting the status to UploadComplete. If for some reason you didn’t want to finish processing the job and wanted to discard it, you could set the job state to Aborted instead. Once the job is marked as UploadComplete, Salesforce will shortly thereafter mark the status as InProgress, and shortly after that, as JobComplete or Failed.
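
And the matching state change, again continuing the same sketch:

    public static async Task MarkUploadCompleteAsync(HttpClient http, string instanceUrl, string jobId)
    {
        // PATCH the job's state; Salesforce then moves it to InProgress and on to JobComplete or Failed.
        var request = new HttpRequestMessage(new HttpMethod("PATCH"),
            $"{instanceUrl}/services/data/v45.0/jobs/ingest/{jobId}")
        {
            Content = new StringContent("{\"state\":\"UploadComplete\"}", Encoding.UTF8, "application/json")
        };

        var response = await http.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }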

Obtaining the Job Results

Salesforce makes two endpoints available to obtain the results of your bulk job’s successful and failed records. Once your job’s status is JobComplete, you can make a request to /jobs/ingest/{{job-id}}/failedResults to obtain the records that failed to process, including the reason those records were unsuccessful. You’ll always get back an sf__Id and sf__Error in your response so you can handle these in your application appropriately. Similarly, you can also send a request to /jobs/ingest/{{job-id}}/successfulResults to obtain the records which were successfully processed. In the success records, you’ll also receive an sf__Id and an sf__Created property which indicates whether the record was created (or modified).

Summary

In this post we discussed the Salesforce-recommended way to work with larger datasets. The Salesforce REST API is great for handling transactional records, or even working with up to 25 records at a time with the composite and batch REST endpoints, but for larger record sets (up to 100 MB of CSV per upload) the preferred way is the Bulk API. We’ve done a pretty good job (in my apparently not so humble opinion) covering the Bulk API features, but be sure to review the official docs if you want to learn more.

 

Understanding OAuth 2.0 Web Server Flow In Salesforce

Introduction

There are three common ways to authenticate with the Salesforce API: the Username/Password flow, the User-Agent flow, and the Web Server flow. There are subtle but important differences between them, so let’s briefly discuss what each of them does:

Username/Password Flow – Works on a secure server, and assumes the developer/application already has a set of API-enabled credentials they are working with.

User-Agent Flow – Works for client apps that reside on a client’s device or browser. This is preferred for JavaScript applications where secret storage cannot be guaranteed.

Web-Server Flow – Works on a secure server, but does not have credentials for the user available. Assumes application can safely store the app secret.

In this post, we’re going to work on how you can test and develop against Salesforce, using the Web-Server Flow, locally on your machine. As an added bonus, we’ll look at how to make those urls public with ngrok.

Salesforce Setup for Connected App

In your Salesforce developer account, navigate to Settings > Apps > App Manager. Click “New Connected App” in the top-right corner, and provide the following settings. It should look similar to the screenshot below when you are finished.

Connected App Name: OAuth2 Demo
API Name: OAuth2_Demo
Contact Email: <enter_your_email_address_here>
Enable OAuth Settings: checked
Callback URL: https://localhost:5001/salesforce/callback
Selected Oauth Scopes: Access your basic information (id, profile, email, address, phone)

Once you’ve finalized the setup of your connected app, be sure to make note of the ‘Consumer Key’ and ‘Consumer Secret’ which will be used in the sample project.

Sample Project Setup

I’ve posted a sample project on Github that you can download here to follow along. You’ll need to update your appsettings.json file with the client-id (Consumer Key) and client-secret (Consumer Secret) from your connected app you defined earlier, but that should be all that is necessary to run the demo application, even without knowledge of .NET.

After running the application, you’ll see the following in the browser

The link that is generated here comes from SalesforceClient.cs. This client factory takes your appsettings.json settings and formulates them into a URL that the user is redirected to. Embedded in it is the client-id for your application. There are a lot of additional options you can set within this link, such as state data you want passed through, display options, and more, outlined here.

After the user authenticates with Salesforce, they are prompted to allow your application access to the scopes you defined in the connected app. Your app name is also presented to the user. If the user selects the ‘Allow’ button, they will be redirected back to the URL specified in the ‘redirect-uri’ parameter. The URL will look something like: https://localhost:5001/salesforce/callback?code=aWekysIEeqM9PiThEfm0Cnr6MoLIfwWyRJcqOqHdF8f9INokharAS09ia7UNP6RiVScerfhc4w%3D%3D

The code parameter in the URL is the piece we are most interested in. This authorization code allows us to call the oauth2/token endpoint with the grant_type set to authorization_code. You can see an example of this in the SalesforceClient.cs file as well. If you’ve reached this point, congratulations: you now have an access token to use to make API requests. I’ve written about all the great things you can do with the Salesforce API here.
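
For anyone not running the sample project, here is a small standalone C# sketch of that exchange. The endpoint and parameters follow the standard web server flow (with placeholder credentials); use https://test.salesforce.com instead of login.salesforce.com for sandbox orgs.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class WebServerFlowSketch
{
    // Step 1: build the authorize URL the user is redirected to (this is the link the demo app renders).
    public static string BuildAuthorizeUrl(string clientId, string redirectUri) =>
        "https://login.salesforce.com/services/oauth2/authorize" +
        $"?response_type=code&client_id={Uri.EscapeDataString(clientId)}" +
        $"&redirect_uri={Uri.EscapeDataString(redirectUri)}";

    // Step 2: exchange the ?code= value from the callback for an access token.
    public static async Task<string> ExchangeCodeAsync(
        string code, string clientId, string clientSecret, string redirectUri)
    {
        using var http = new HttpClient();
        var response = await http.PostAsync(
            "https://login.salesforce.com/services/oauth2/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["code"] = code,
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["redirect_uri"] = redirectUri
            }));
        return await response.Content.ReadAsStringAsync(); // JSON containing access_token, instance_url, etc.
    }
}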

Bonus: Make a public facing URL with Ngrok

Ngrok is a tunneling application that allows you to forward public-facing URLs to local URLs on your workstation. After downloading it (and adding it to your operating system’s path, which I’d recommend), run the following command:

ngrok http 5000

This will give you a window similar to the one below. Note how we are using the insecure port 5000. If you really want to forward to the SSL endpoint, there is documentation on how to achieve this here: https://ngrok.com/docs#tls-cert-warnings, but I won’t be going over it. Suffice it to say, you may run into a 502 Bad Gateway if you forward to the SSL port without that configuration.

Once you’ve done this, you can update your connected application configuration in Salesforce and replace the https://localhost:5001/salesforce/callback URL with your new ngrok URL (e.g. https://4218e857.ngrok.io/salesforce/callback). You’ll also need to update this in your application’s appsettings.json file. This new URL will forward to port 5000 on your machine, the port you set on the command line when you ran the ngrok executable.

The bonus of doing this is that you can share the URL with your customer, or your project manager, to give them a ‘preview’ of how the application will behave by providing them with a public-facing URL. If you want to learn more about ngrok, Twilio has a nice write-up here.

 

Introduction to the Salesforce REST API (using Postman)

This post is going to be a rather lengthy introductory course on the Salesforce REST API. If you’re just looking for the Postman collection, or would like to follow along, click here. We’ll discuss authentication, basic read operations, SOQL queries, batch & composite queries, and querying with an external key. We’ll also touch on the Salesforce Workbench.

In future posts, I’ll discuss creating, updating, and deleting data with the REST API. I’m also planning posts on the Bulk API and Streaming API. If you aren’t already familiar with the Salesforce ecosystem, I’d encourage you to read this post first. Additionally, you’ll need Postman on your machine to get the most benefit from this post.

Salesforce Setup

Let’s get started. If you haven’t done so already, you’ll need to set up a developer account at https://developer.salesforce.com.

Once you’re logged in, navigate to Setup from the gear icon in the top-right. In the ‘Quick Find’ box on the left, type “App Manager” and select the menu item with the same name. On this screen, you’ll see a number of pre-built connection points. Let’s add our own by selecting “New Connected App” in the top-right.

Enter the following information and click “Save”:

Connected App Name: Postman Client
API Name: Postman_Client
Contact Email: <enter_your_email_address_here>
Enable OAuth Settings: checked
Callback URL: http://localhost/callback
Selected OAuth Scopes: Access and manage your data (api)

Once this is saved, you’ll be notified that it takes a few minutes for Salesforce to finalize your setup. Make a note of the “Consumer Key” and “Consumer Secret” listed on the app screen. The final piece of information you need is a security token. Navigate to your profile settings (see below), select the Reset My Security Token item, and follow the instructions to obtain the token we’ll use for logging in.

A quick word about security tokens. They are not needed (and, for better security, it may be preferable not to use them) if you are on a network with a static IP and can whitelist that address inside the ‘Settings > Security > Network access’ menu item. If you have this set, you can skip the security token for the rest of the article. You could also use the token now and change your approach in production. I’m just using this approach so that everyone can follow along.

Postman Setup

If you haven’t done so already, please import the Postman collection by downloading it here. You can either download and extract the zip, or copy/paste the raw text. Pressing Ctrl+O or navigating to File > Import in Postman will let you do either to import the collection. When you’re done, it should look similar to this:

You’ll also need to set up an ‘Environment’ for Salesforce, which you can see in the top-right of the screenshot above. You’ll need to set the values for your client-id, client-secret, and also your username/password. I strongly encourage storing these values in the Environment instead of in the Postman request so requests can be shared with others without exposing your credentials. Below is an example of what your Postman environment for Salesforce should look like when it’s done.

Be sure you also create the fields for instance-url and access-token above, even though they are empty for now. More on this below.
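
For reference, these are the environment keys the requests in this post rely on; the first five you fill in yourself, and the last two start out empty:

client-id       -> Consumer Key from the connected app
client-secret   -> Consumer Secret from the connected app
username        -> your Salesforce username
password        -> your Salesforce password
security-token  -> from ‘Reset My Security Token’
instance-url    -> left empty; set automatically after authentication
access-token    -> left empty; set automatically after authentication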

Authentication

OK, so with the setup ceremony out of the way, let’s start having fun. Below is an example authentication request to Salesforce. Note this is a POST request, sent x-www-form-urlencoded, with a set of key/value pairs that come from our environment (selected in the top-right as Salesforce). Also, observe how the {{password}} and {{security-token}} fields appear right next to each other, concatenated.
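
In case the screenshot is hard to read, the request is a POST to https://login.salesforce.com/services/oauth2/token, and the body should contain key/value pairs along these lines (the double curly braces are Postman environment references):

grant_type    = password
client_id     = {{client-id}}
client_secret = {{client-secret}}
username      = {{username}}
password      = {{password}}{{security-token}}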

Something we’re also going to review here is the “Tests” tab of this request. Tests are a fantastic feature in Postman, and they are worth learning more about. For the sake of this post, we are just going to use the Tests feature to set our environment variables for {{access-token}} and {{instance-url}}. This is slick, because now we’ve taken our JSON response, parsed it, stored pieces of the response in our environment, and can reuse that data automatically in our future requests.

API Limits

I think it’s important that our first ‘real’ request to the Salesforce API be one to obtain the ‘limits’ for your account. API limits are an important concept in Salesforce: this is a SaaS/multi-tenant environment, and you are sharing resources with other users. Note below, we are using our {{access-token}} as stored by our authentication test.
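
If you’re building the request by hand instead of using the collection, it’s just a GET with the bearer token attached, something like:

GET {{instance-url}}/services/data/v44.0/limits
Authorization: Bearer {{access-token}}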

The limits you see here are for a 24-hour period. Our developer account is limited to 15,000 API requests per day, and we have 14,999 remaining, having used our first one to inquire about the limits for our account. A keen eye will also note that the “Postman Client” app we defined earlier has a limit range, but nothing is set. Apps can potentially have their own API limit quotas as well, which a Salesforce admin may set for your app.

Limits encourage good software design and should be embraced. You’ll want to consider limits and supporting application design principles to ensure your application can work within them. One quick note: if you are looking to modify a large amount of data, take a look at the Bulk API, which I will discuss in a future post.

Basic Queries

The format for single-object queries in Salesforce is typically: /services/data/<version>/sobjects/<objectType>/<salesforceId>. An example of this is below. The last part of this query is a Salesforce Id, a 15-character case-sensitive key (you will often see its 18-character case-insensitive equivalent, as in the example below) that will never change, even if the record is deleted, then later undeleted.

For the example below, the record identifier will likely be different in your system, so be sure to head to the Sales app inside your Salesforce portal, filter the Accounts screen to view all, and select one from the list. Salesforce always puts the identifier for the current record in the URL, so you can grab one from your instance there. (ex: https://na85.lightning.force.com/lightning/r/Account/0011U000006ee9iQAA/view)
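
Putting that together, the request for that account should look something like this (substitute your own record Id):

GET {{instance-url}}/services/data/v44.0/sobjects/Account/0011U000006ee9iQAA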

Something you may see in the future is querying a custom object or a custom property. Salesforce differentiates standard and custom objects with a “__c” suffix on object types and properties that are not standard. If I had created a custom object called “MyCustomObject”, I would query it with “/MyCustomObject__c/” in the URL. We’ll see this again when we talk about querying with external keys later in the post.

Additionally, when querying an object, you can filter the fields that are returned. Do you see how appending the fields querystring parameter allows us to narrow down the resulting record to just the pieces of information we’d like to retrieve? The smaller payload will result in improved performance over the wire and also faster deserialization time in our applications.
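
For example, something along these lines (Name, Phone, and Website are standard Account fields):

GET {{instance-url}}/services/data/v44.0/sobjects/Account/0011U000006ee9iQAA?fields=Name,Phone,Website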

SOQL Queries

Once you’re comfortable with basic querying in Salesforce, a good next step is the SOQL query syntax. SOQL is most often used inside the Salesforce ecosystem within the Apex programming language, but we can leverage it here for REST calls as well. As you can see (1), the language is very SQL-like in nature, with the one big difference (to me) being joins.
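
As a simple example, the /query resource accepts the SOQL statement via the q querystring parameter, so a request might look something like this (Postman will URL-encode the spaces for you):

GET {{instance-url}}/services/data/v44.0/query?q=SELECT Id, Name, Industry FROM Account LIMIT 10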

The result set we get back from the query has our results (3) and also a few interesting fields to note (2). The first is the totalSize and done parameters. If the value for done is false, you will also see a top-level property called nextRecordsUrl that will look like: /services/data/v44.0/query/01g2100000PeqVoAAJ-2000/ (yours will be slightly different). You would then issue a GET against that URL to retrieve the next batch of records (2001-4000 in this case); there is no need to re-apply your SOQL query, as the query locator embedded in the URL already captures it.

Querying with an External Key

Most often when you are using the Salesforce REST API, you are doing so to connect it to an external business system which has its own record identifiers/keys. Salesforce understands this notion and allows you to create properties on your objects which are of type “External Id”.

To set up one of these fields, navigate to Setup (1) > Object Manager (2) > select your relevant record type, Lead in this case (3) > Fields & Relationships (4), and then add a new field of type Text (5) to hold your key.


On the following screen, specify a name for your field, and an API/Field name will be auto-generated for you (1). Note that anything that is a custom field or custom object type in Salesforce will always be referenced with “__c” at the end of it, so in a bit we’ll see how this field “External_Id” actually maps to “External_Id__c” in the API. Finally, the checkbox at the end (2) indicates that this field is a unique identifier for a record in an external system.

Now that we have our External Id field set up, we can perform queries against it. You can also use it to update records, which I’ll talk about in the future, including how to reference parent/child records using the external id on each (i.e. Accounts + Contacts). See how instead of using the Salesforce Id in the request, we’ve included the External_Id__c property in the URL and then referenced the record which has a value of 21456 in this field.
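
The request should look something like this:

GET {{instance-url}}/services/data/v44.0/sobjects/Lead/External_Id__c/21456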

Batch & Composite Queries

Salesforce offers two ways to execute multiple ‘requests’ (which they refer to as subrequests) via a single API request: batch queries and composite queries.

Let’s start with batch queries, as shown in the screenshot below. In our payload (1), you’ll see we can provide multiple subrequests in an array. The batch query format supports up to 25 subrequests per API request, and each subrequest executes independently. Each subrequest also counts against the API limit noted earlier. The advantage here is that we reduce our round-trip processing time and improve the speed of our applications.
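
If you’d rather not squint at a screenshot, a minimal batch payload, POSTed to {{instance-url}}/services/data/v44.0/composite/batch, looks roughly like this (the two subrequests are just examples):

{
	"batchRequests": [
		{ "method": "GET", "url": "v44.0/limits" },
		{ "method": "GET", "url": "v44.0/sobjects/Account/0011U000006ee9iQAA" }
	]
}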

In our response, we can see whether any errors were returned (2) and the status code for each independent subrequest/response (3), along with the payload that would be returned. The batch API can also be used to perform PATCH, POST, and DELETE requests to modify your Salesforce data, which I’ll discuss in the future.

Composite queries are similar to batch queries, but differ in a few key ways. First, the individual subrequests do not count against your API limits. Second, the subrequests are executed in order.

Also, note how the response object from a prior subrequest (1) can later be referenced in a subsequent subrequest (2) in the chain. Each response from a composite query is wrapped in a body object (3). Similar to the batch API, you can also use composite requests to create, update, and delete records instead of just querying.
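
As a rough sketch, a composite payload POSTed to {{instance-url}}/services/data/v44.0/composite might create a record and then read it back by referencing the first subrequest’s response (the reference names below are arbitrary):

{
	"compositeRequest": [
		{
			"method": "POST",
			"url": "/services/data/v44.0/sobjects/Account",
			"referenceId": "newAccount",
			"body": { "Name": "Composite Demo Account" }
		},
		{
			"method": "GET",
			"url": "/services/data/v44.0/sobjects/Account/@{newAccount.id}",
			"referenceId": "newAccountInfo"
		}
	]
}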

Salesforce Workbench

There is a great utility application available at https://workbench.developerforce.com/. This portal allows you to perform the same operations we described above when prototyping your applications with Postman. Not only do you gain access to the REST API in a nice visual interface, but you can also test SOQL queries, subscribe to Push Topics (to be notified when data changes in Salesforce), view standard and custom object metadata, and more.

Below is an example of the “REST Explorer” in the Workbench, available by going to Utilities > REST Explorer after logging in. You can click through each of these endpoints and see everything Salesforce exposes via the API.

Final Thoughts

Salesforce provides developers with a well-designed, properly versioned, full-featured API for connecting your applications to their ecosystem. There are multiple ways to read as well as modify data inside the Salesforce system, and they let you work in a performant, responsible way so that all users of the platform can maintain similar performance in their requests as well. If you enjoyed this post, I’d love to hear how it helped you. You can find me on Twitter and GitHub. Also, I’d love it if you subscribed to my semi-regular newsletter of fun things I’ve found online.

 

Statically Typed Newsletter: Volume 1

Inspired by Tim Ferriss’s 5-Bullet Friday and Scott Hanselman’s Newsletter of Wonderful Things, I’ve decided this would be a great place for me to share some of the humorous, educational, insightful, or just generally weird things I find online. It could be videos, book recommendations, gadgets, recipes, and so on.

My plan is to make this an email-based newsletter. Do not be afraid, weary internet traveler: your information is not being sold to anyone else, nor am I going to try to enroll you in my time-share vacation home program. It’s just a place for me to share what I personally find interesting with other like-minded individuals, such as yourself.

If you like what you see, please sign up for the newsletter at the bottom of the page. Without further ado, below is an example of what you would receive with each volume:

If you’ve made it this far, thanks for reading. I plan to share more in the future, so please opt in to the newsletter below if you’d like to be notified when I do. Also, I’d love it if you shared this list with a friend or co-worker.

 

Improving Model Validation for Entity Framework Core 2.0

If you’ve done .NET programming against a SQL data store for any length of time, you’ve likely run into the following error:

String or binary data would be truncated. The statement has been terminated.

This is an unfortunately common scenario for me, and it feels like there is no easy way (at least that I’m aware of) to identify which column/property is the culprit. I also recently started working with Entity Framework (I had always preferred not using an ORM prior to this) and figured this would be handled by the framework. After a bit of digging, I found this in the EF Core documentation:

Entity Framework does not do any validation of maximum length before passing data to the provider. It is up to the provider or data store to validate if appropriate. For example, when targeting SQL Server, exceeding the maximum length will result in an exception as the data type of the underlying column will not allow excess data to be stored.

Even if I use the [MaxLength(#)] or [Required] attributes, Entity Framework does not look at these prior to submitting changes to the database. So it looks like it’s up to the developer to solve for this, given EF Core is a cross-platform framework capable of working with multiple data stores, each with its own rules. As a result, I created the extension method below to perform my own validation:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class DbContextValidationExtensions
{
	public static async Task<int> SaveChangesWithValidationAsync(this DbContext context)
	{
		// Only validate entities that are actually being written to the database.
		var recordsToValidate = context.ChangeTracker.Entries()
			.Where(e => e.State == EntityState.Added || e.State == EntityState.Modified);
		foreach (var recordToValidate in recordsToValidate)
		{
			var entity = recordToValidate.Entity;
			var validationContext = new ValidationContext(entity);
			var results = new List<ValidationResult>();
			if (!Validator.TryValidateObject(entity, validationContext, results, true)) // Need to validate all properties, otherwise it just checks [Required].
			{
				var messages = string.Join(", ", results.Select(r => r.ErrorMessage));
				throw new ApplicationException($"Unable to save changes for {entity.GetType().FullName} due to error(s): {messages}");
			}
		}
		return await context.SaveChangesAsync();
	}
}

Note: We could also implement IValidatableObject on our model classes and perform more bespoke validation inside the Validate method, and have it covered by this extension method as well (see the sketch at the end of this post).

This method can then be called from within your code as follows:

public class Person
{
	[MaxLength(50)]
	public string Name { get; set; }
	[MaxLength(50)]
	public string Title { get; set; }
}

// Title here is longer than 50 characters, so SaveChangesWithValidationAsync
// throws before the INSERT is ever sent to the database.
var person = new Person() { Title = "This title is longer than 50 characters and will throw an exception" };
context.PersonSet.Add(person);
await context.SaveChangesWithValidationAsync();
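
As a quick illustration of the IValidatableObject note from earlier, here’s a minimal sketch of what that could look like on the Person class; the cross-field rule itself is made up purely for demonstration:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Person : IValidatableObject
{
	[MaxLength(50)]
	public string Name { get; set; }
	[MaxLength(50)]
	public string Title { get; set; }

	// Object-level rule, picked up by Validator.TryValidateObject in the extension
	// method above. The rule itself is only an example.
	public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
	{
		if (string.IsNullOrWhiteSpace(Name) && !string.IsNullOrWhiteSpace(Title))
		{
			yield return new ValidationResult(
				"A Title cannot be set without a Name.",
				new[] { nameof(Title), nameof(Name) });
		}
	}
}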