Getting Started with the Salesforce Streaming API in .NET Core

Introduction

Have you ever wished you could get a notification whenever a record type or subset of records changed in Salesforce? You might want those notifications so you can replicate data, or so you can trigger behavior in an external business system.

Enter the Streaming API from Salesforce. This feature consists of several subfeatures which enable you to do just that. Opting in to certain changes is possible with Push Topics. Getting a complete list of every change is possible with Change Data Capture. We’re going to talk about both in this post, and how you can integrate your .NET applications with Salesforce to capture those changes in near real-time.

If you want to see a full breakdown of the Streaming API components and capabilities, you can do so here. Also, please be sure your account meets the criteria for the Streaming API. If you have a developer sandbox from https://developer.salesforce.com, you should already be set.

Finally, I recommend you download the Postman collection I created for Salesforce here, and also the sample .NET Core application I created for connecting to the Streaming API.

CometD, Bayeux Protocol, and Long Polling

The Streaming API is possible thanks to a clustered web messaging technology known as CometD. This event messaging system is built on top of the Bayeux protocol, which transmits messages over HTTP. When you connect your .NET Core application to the Salesforce Streaming API, you are not repeatedly polling for new messages. You are actually establishing an open (long-polling) connection, and you are waiting/listening for those changes to be pushed out to you.

Below is a diagram that shows the various connections that are made when establishing a connection to the Streaming API. The client first performs a handshake to establish a long-polling connection. If successful, it then sends a subscription request for the channel you want to subscribe to (more on this below). A channel represents which type of event notifications you wish to receive. Once the channel subscription is open, the long-polling sequence takes over and you maintain this connection listening for changes.
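If you prefer code to diagrams, below is a minimal C# sketch of that handshake/subscribe/connect sequence using HttpClient. The API version (44.0), the ExtractClientId helper, and the error handling (there is none) are assumptions for illustration; a production client should use a CometD library and honor the server’s reconnect advice.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class StreamingApiSketch
{
    public static async Task ListenAsync(string instanceUrl, string accessToken)
    {
        var endpoint = $"{instanceUrl}/cometd/44.0/"; // API version is an assumption
        using (var http = new HttpClient { Timeout = TimeSpan.FromMinutes(2) })
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // 1. Handshake: establishes the session and returns a clientId.
            var handshake = await PostAsync(http, endpoint,
                "[{\"channel\":\"/meta/handshake\",\"version\":\"1.0\",\"supportedConnectionTypes\":[\"long-polling\"]}]");
            var clientId = ExtractClientId(handshake);

            // 2. Subscribe to a channel (all Change Data Capture events here).
            await PostAsync(http, endpoint,
                "[{\"channel\":\"/meta/subscribe\",\"clientId\":\"" + clientId + "\",\"subscription\":\"/data/ChangeEvents\"}]");

            // 3. Connect: each request blocks until events arrive or the server
            //    times out, then is immediately re-issued -- the long-polling loop.
            while (true)
            {
                var events = await PostAsync(http, endpoint,
                    "[{\"channel\":\"/meta/connect\",\"clientId\":\"" + clientId + "\",\"connectionType\":\"long-polling\"}]");
                Console.WriteLine(events); // handle change events here
            }
        }
    }

    private static async Task<string> PostAsync(HttpClient http, string url, string json)
    {
        var response = await http.PostAsync(url, new StringContent(json, Encoding.UTF8, "application/json"));
        return await response.Content.ReadAsStringAsync();
    }

    private static string ExtractClientId(string handshakeJson)
    {
        // Crude string parse to keep the sketch dependency-free; use a JSON parser in real code.
        const string key = "\"clientId\":\"";
        var start = handshakeJson.IndexOf(key, StringComparison.Ordinal) + key.Length;
        return handshakeJson.Substring(start, handshakeJson.IndexOf('"', start) - start);
    }
}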

One advantage the Streaming API has over the Outbound Messages feature from Salesforce, is that you can run this from behind a firewall on your network without exposing an HTTP endpoint for Salesforce to push records to. You can read more about CometD, Bayeux Protocol, and Long Polling on the Salesforce documentation site.

Streaming API Limits

Just like with our previous posts on the REST API and on the Bulk API, Salesforce enforces limits around how many events can be captured per 24-hour period with the Streaming API. The limits are in place because Salesforce is a multi-tenant application, and resources are shared among customers. Constraints are considered beneficial because they promote good software design, and when you are developing your applications to connect to the Streaming API, you should be cognizant of the limits in place (which may vary from environment to environment, or customer to customer).

What is Change Data Capture?

Change Data Capture is currently in developer preview, and is set to be formally deployed in the Spring ’19 release (tentatively scheduled for Feb. 8, 2019).

Below is an example payload you would receive from a Change Data Capture channel subscription when one of those entity types changes:
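Roughly, the event looks like this (the record values here are invented for illustration, but the shape follows the documented ChangeEventHeader format):

{
  "channel": "/data/ChangeEvents",
  "data": {
    "schema": "IeRuaY6cbI_HsV8Rv1Mc5g",
    "payload": {
      "ChangeEventHeader": {
        "entityName": "Lead",
        "changeType": "UPDATE",
        "changedFields": [ "Company", "LastModifiedDate" ],
        "recordIds": [ "00Q1U000001qzVxUAI" ],
        "commitTimestamp": 1547584572000,
        "commitUser": "0051U000004IbDGQA0"
      },
      "Company": "American Banking Corp2",
      "LastModifiedDate": "2019-01-15T20:36:12.000Z"
    },
    "event": { "replayId": 4 }
  }
}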


A few points in the payload JSON that gets sent to us are worth discussing briefly. The first is the entity type (1) which was changed. Change Data Capture allows you to subscribe to specific channels, such as /data/AccountChangeEvent for just Account changes, or, as in the payload above, all changes via /data/ChangeEvents (5). More on subscription channels for Change Data Capture is available here.

The next piece of information we get is the changeType (2) which outlines what type of operation happened. We’re also given just the data which changed (3) so we can limit the size of our payload and know what exactly needs to be updated.

Finally, we’re given a replayId (4) which can serve as a sort of audit trail for this change. Salesforce supports the notion of durable messages, meaning these events are saved on the platform and can be ‘replayed’ for a period of time (1–3 days). When you establish your ‘subscription’ in the connection diagram above, you can also provide a replay value. A value of -1 (the default) means you want only new messages on this channel. A value of -2 means you want everything available (be aware of your API limits and the processing time required for this). A value of X (a prior replayId) means you want everything after that captured event, assuming it falls within the availability window.

I won’t be discussing replays in this post, partly because I’m still working on updating my sample code to support them, but if you want to learn more about them you can see a nice write-up here.

Setting up Change Data Capture Events

Inside your development org, head over to Settings > Integrations > Change Data Capture. Here you will see a screen that looks similar to the screenshot below. There are two columns: Available Entities, to which you can subscribe, and Selected Entities, to which you have already subscribed. Select a few examples, such as Lead, Account, and Contact. You can also subscribe to changes on custom objects you have created.

For each standard entity you selected, you can retrieve those events via the /data/<Type>ChangeEvent channel. Custom objects are available via a /data/<CustomObject>__ChangeEvent channel (note the double underscore). If you want to subscribe to all changes for all of your selected entities, the /data/ChangeEvents channel allows you to do that.

Subscribing to Change Data Capture Events

I’ve created a sample project which connects to the Salesforce Streaming API and set it up on Github, which you can use as a starting point to begin receiving notifications. You’ll need to update the appsettings.json file with your relevant values. As an important aside, please be sure to use user secrets for development, and environment variables or Azure Key Vault for production configuration, to ensure you are safely maintaining credentials for your orgs.

Once you’ve updated your settings, you can run the application. I’d encourage you to step through the code as well, which first establishes an OAuth token with your credentials, then performs a channel subscription to the channel you provided, which I’ve defaulted to /data/ChangeEvents to see all changes. If you have trouble connecting, be sure you can connect to the REST API using Postman and then try again.

As you can see in the screenshot above, we’ve managed to successfully connect to our org (1), perform a handshake to establish a long-polling connection (2), and designate our channel subscription (3, 4). Once the application began waiting for changes, we updated a Lead record and changed the company field to “American Banking Corp2” which you see outlined in our change data capture payload (5).

That about wraps it up for Change Data Capture. There is certainly a lot more to explore, but if you are looking to capture every change to certain record types in your org for replication or other purposes, this feature of the Streaming API gives you that information. There is also a great Trailhead module on Change Data Capture if you want to apply this badge to your Trailhead account and learn more.

What are PushTopics?

Push Topics are a way to receive more granular notifications based on a SOQL query you define. If you’re not familiar with SOQL, it is very similar to SQL but with a few minor differences around how records are joined. You can see a few examples here. Salesforce analyzes record changes against your SOQL query, and based on criteria outlined below, determines if the record should be pushed out to you.

Another parameter to consider when developing your Push Topic SOQL query is the NotifyForFields value. Your options are:

  • All – all field changes are pushed for records that match the WHERE clause of your query.
  • Referenced (default) – only changes to fields in your SELECT or WHERE clause are pushed, for records which match the WHERE clause.
  • Select – only changes to fields in your SELECT statement are pushed, for records that match your WHERE clause.
  • Where – like Select, but monitors only changes to the fields in your WHERE clause.

Setting Up PushTopics

Thankfully, it’s possible to create Push Topic records with the REST API. You can download the latest Postman collection which includes these calls. To create a push topic, we create a new sObject with a name (1), which we will later use to subscribe to our channel at /topic/Chicago_Leads. We also provide a SOQL query (2) which selects Name, FirstName, LastName, and Industry (Id is always required), with a WHERE clause looking for Leads just in Chicago. We also designate a NotifyForFields parameter of Select, which means we’ll get push notifications when fields in our SELECT (such as FirstName or LastName) change, but only for leads in Chicago.
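For reference, the request looks roughly like this (the API version is an assumption; the field values mirror the example above):

POST {{instance-url}}/services/data/v44.0/sobjects/PushTopic
{
  "Name": "Chicago_Leads",
  "Query": "SELECT Id, Name, FirstName, LastName, Industry FROM Lead WHERE City = 'Chicago'",
  "ApiVersion": 44.0,
  "NotifyForFields": "Select",
  "NotifyForOperationCreate": true,
  "NotifyForOperationUpdate": true
}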

Once the push topic has been created, we get an Id back (4) which we can use for updating this topic if needed. One point worth mentioning is that we also included the Name field in our query. Name is a compound field, and Salesforce has some special rules around notifications for compound fields like name and address when creating push topics.

Subscribing to PushTopics

To test this, update the appsettings.json file in our demo application to set the channel to “/topic/Chicago_Leads” and connect to the channel (1). Then, in your browser, edit a lead with City = ‘Chicago’ and change its Industry; a Streaming API event will be pushed to us (2).

Another way we can test our Push Topics is to use the Salesforce Workbench utility. After logging in, we navigate to Queries > Streaming Push Topics. Here we can create a new test Push Topic (there is no option to specify the NotifyForFields parameter, so assume Referenced), and we can view the stream of events/messages that are happening.

Salesforce has a nice Trailhead module on Push Topics and the Streaming API if you’d like to learn more about this subject.

Summary

To recap, the Streaming API is a great way to connect an externally hosted application or dataset to Salesforce and keep that system up to date with the changes happening in Salesforce. Whether you push just a certain predefined set of records with Push Topics, or take a fire hose of all changes with Change Data Capture, the Streaming API enables us as developers to keep systems in sync with each other.

 

Processing Large Amounts of Data in Salesforce with Bulk API 2.0

Running into limits with your Salesforce REST API usage, or simply looking to proactively process a large amount of data in Salesforce? The limits on how many calls you can make to the REST API per 24-hour period may cap how much data you can process, so an alternative approach is needed. Thankfully, with the Bulk API, you can process up to 100 million records per 24-hour period, which should (hopefully) be more than enough for you.

If you need help authenticating with the Salesforce API and getting up and running, or using Postman to manage requests and collections, I’ve written about those both previously. Also, if you would like a way to generate lots of sample data to test with, I’ve written about the Faker JS library which could help with that too. (Note: Faker is available on other platforms also: .NET, Ruby, Python, and Java).

As a final note, you can download the Postman collection I’ve been using for my Salesforce series on Github if you want to follow along that way.

Create the Job

The first step in creating a Bulk API request is to create a job. As part of the job request, you specify the object type, operation (insert, delete, upsert), and a few additional fields. In this example, we’re using the upsert operation paired with the External_Id__c field. If you don’t have this external id field on your Lead object, see my REST API post where I talk about working with external keys, which are very important when connecting external business systems. More information about the request body is available here.
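The job-creation request is shaped roughly like this (the API version is an assumption; adjust to whatever your org supports):

POST {{instance-url}}/services/data/v44.0/jobs/ingest
{
  "object": "Lead",
  "operation": "upsert",
  "externalIdFieldName": "External_Id__c",
  "contentType": "CSV",
  "lineEnding": "LF"
}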

Once the response is returned, we can see we’re given a job id, as well as some additional information about the job, including the job’s state, which is currently set to Open. The Open state means the job is ready to begin accepting data. We also have a Postman “Test” attached to this request, which saves the returned {{job-id}} parameter so it can be used in future requests.

As a bonus, you can also view your bulk data jobs in the Salesforce web administration panel. Just head to Setup > Environments > Jobs > Bulk Data Load Jobs and you can see your currently open jobs as well as any jobs completed in the last 7 days.

Checking the Job Status

A common task when working with bulk jobs is checking the status of the job. We’ll do this now and see the job is still in the ‘Open’ state, ready to accept data. In the future, we may see our job in any of the following states: Open, UploadComplete, InProgress, JobComplete, Aborted, or Failed.

We’re also given the contentUrl parameter here, which is where we want to post our actual CSV data.

Uploading the Data

Once we are armed with an ‘Open’ job, ready to accept data, we’re ready to start sending data to the job. We do this by sending a PUT request to the contentUrl from our job. In the screenshot below you can see we’ve set our Content-Type to text/csv, and are sending the CSV data in the request body. Here you can match fields for your job’s object type (Lead in this case). If you were sending something like Contact records, you could also specify a field for Account.External_Id__c (assuming you had external id setup on Account) to properly link the contact records to the corresponding Account object.
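For illustration, a minimal CSV body for this Lead upsert might look like the following (the values are invented; the header row must match field API names):

FirstName,LastName,Company,City,External_Id__c
Sally,Jones,Acme Industries,Chicago,21456
Marcus,Lee,Globex Corp,Denver,21457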

There are also limits to be aware of when sending your CSV data to the job. The current limit is 150 megabytes of base64 encoded content. If your content is not already base64 encoded, consider limiting your size to 100 megabytes since Salesforce converts your request to base64 upon receipt and this can result in up to a 50% increase in size.

Marking the Job Complete

We briefly mentioned job state earlier when talking about job status. Once you have finished sending the batches to your job, you will need to mark your job completed by setting the status to UploadComplete. If for some reason you didn’t want to finish processing the job and wanted to discard it, you could also set the job state here to Aborted. Once the job is marked as UploadComplete, Salesforce will shortly thereafter mark the status as InProgress, and shortly after that, as JobComplete or Failed.
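The state change itself is a small PATCH request (again, the API version is an assumption):

PATCH {{instance-url}}/services/data/v44.0/jobs/ingest/{{job-id}}
{ "state": "UploadComplete" }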

Obtaining the Job Results

Salesforce makes two endpoints available to obtain the results of your bulk job’s successful and failed records. Once your job’s status is JobComplete, you can make a request to /jobs/ingest/{{job-id}}/failedResults to obtain the records that failed to process, including an example of why those records were unsuccessful. You’ll always get back a sf__Id and sf__Error in your response so you can handle these in your application appropriately. Similarly, you can also send a request to /jobs/ingest/{{job-id}}/successfulResults to obtain the records which were successfully processed. Inside the success records, you’ll also receive a sf__Id and also a sf__Created property which indicates if the record was created (or modified).

Summary

In this post we discussed the Salesforce-recommended way to work with larger datasets. The Salesforce REST API is great for handling transactional records, or even working with up to 25 records at a time via the composite and batch REST endpoints, but for larger recordsets the preferred way is the Bulk API. We’ve done a pretty good job (in my apparently not so humble opinion) covering the Bulk API features, but be sure to review the official docs if you want to learn more.

 

Understanding OAuth 2.0 Web Server Flow In Salesforce

Introduction

There are three common ways to authenticate with the Salesforce API. Username/Password flow, User-Agent flow, and Web Server flow. There are subtle but important differences for each of them, so let’s briefly discuss what each of them does:

Username/Password Flow – Works on a secure server, and assumes the developer/application already has a set of API-enabled credentials they are working with.

User-Agent Flow – Works for client apps that reside on a client’s device or browser. This is preferred for JavaScript applications where secret storage cannot be guaranteed.

Web-Server Flow – Works on a secure server, but does not have credentials for the user available. Assumes application can safely store the app secret.

In this post, we’re going to work on how you can test and develop against Salesforce, using the Web-Server Flow, locally on your machine. As an added bonus, we’ll look at how to make those URLs public with ngrok.

Salesforce Setup for Connected App

In your Salesforce developer account, navigate to Settings > Apps > App Manager. Click “New Connected App” in the top-right corner, and provide the following settings. It should look similar to the screenshot below when you are finished.

Connected App Name: OAuth2 Demo
API Name: OAuth2_Demo
Contact Email: <enter_your_email_address_here>
Enable OAuth Settings: checked
Callback URL: https://localhost:5001/salesforce/callback
Selected OAuth Scopes: Access your basic information (id, profile, email, address, phone)

Once you’ve finalized the setup of your connected app, be sure to make note of the ‘Consumer Key’ and ‘Consumer Secret’ which will be used in the sample project.

Sample Project Setup

I’ve posted a sample project on Github that you can download here to follow along. You’ll need to update your appsettings.json file with the client-id (Consumer Key) and client-secret (Consumer Secret) from your connected app you defined earlier, but that should be all that is necessary to run the demo application, even without knowledge of .NET.

After running the application, you’ll see the following in the browser:

The link that is generated here comes from SalesforceClient.cs. This client factory takes your appsettings.json settings and formulates them into a URL that the user is redirected to. Embedded in it is the client-id for your application. There are many additional options you can set within this link, outlined here, such as state data you want passed through, display options, and more.
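Broken apart, the generated link looks something like this (line breaks added for readability; the client-id value comes from your connected app):

https://login.salesforce.com/services/oauth2/authorize
    ?response_type=code
    &client_id=<your_consumer_key>
    &redirect_uri=https://localhost:5001/salesforce/callback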

After the user authenticates with Salesforce, they are prompted to allow your application access to the scopes you defined in the connected app. Your app name is also presented to the user. If the user selects the ‘Allow’ button, they will be redirected back to the URL you specified in the ‘redirect-uri’ parameter. The URL will look something like: https://localhost:5001/salesforce/callback?code=aWekysIEeqM9PiThEfm0Cnr6MoLIfwWyRJcqOqHdF8f9INokharAS09ia7UNP6RiVScerfhc4w%3D%3D

The code parameter in the URL is the piece we are most interested in. This authorization code allows us to call the oauth2/token endpoint with the grant_type set to authorization_code. You can see an example of this in the SalesforceClient.cs file as well. If you’ve reached this point, congratulations, you now have an access token to use to make API requests. I’ve written about all the great things you can do with the Salesforce API here.
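A minimal sketch of that exchange, assuming an HttpClient and the values from appsettings.json (the method name is mine, not the sample project’s):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> ExchangeCodeAsync(
    HttpClient httpClient, string code, string clientId, string clientSecret)
{
    // Exchange the one-time authorization code for an access token.
    var response = await httpClient.PostAsync(
        "https://login.salesforce.com/services/oauth2/token",
        new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "authorization_code",
            ["code"] = code,
            ["client_id"] = clientId,
            ["client_secret"] = clientSecret,
            ["redirect_uri"] = "https://localhost:5001/salesforce/callback"
        }));
    return await response.Content.ReadAsStringAsync(); // JSON containing access_token
}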

Bonus: Make a public facing URL with Ngrok

Ngrok is a tunneling application that allows you to forward public-facing URLs to local URLs on your workstation. After downloading it (and adding it to your operating system’s path, which I’d recommend), run the following command:

ngrok http 5000

This will give you a window similar to the one below. Note how we are using the insecure port 5000. If you really want to forward to the SSL endpoint, there is documentation on how to achieve this here: https://ngrok.com/docs#tls-cert-warnings but I won’t be going over this. Suffice it to say, you may run into a 502: Bad Gateway if you do this.

Once you’ve done this, you can update your connected application configuration in Salesforce and replace the https://localhost:5001/salesforce/callback URL with the new ngrok URL you have here (i.e. https://4218e857.ngrok.io/salesforce/callback). You’ll also need to update this in your appsettings.json file for your application. This new URL will forward to port 5000 on your machine that you set on the command line when you ran the ngrok executable.

The bonus of doing this is that you can share this URL with your customer, or your project manager, to give them a ‘preview’ of how the application will look by providing them with a public-facing URL. If you want to learn more about ngrok, Twilio has a nice write-up here.

 

Introduction to the Salesforce REST API (using Postman)

This post is going to be a rather lengthy introductory course on the Salesforce REST API. If you’re just looking for the Postman collection, or would like to just follow along, click here. We’ll discuss authentication, basic read operations, SOQL queries, batch & composite queries, and querying with an external key. We’ll also touch on the Salesforce workbench.

In future posts, I’ll discuss creating, updating, and deleting data with the REST API. I’m also planning posts on the Bulk API and Streaming API. If you aren’t already familiar with the Salesforce ecosystem, I’d encourage you to read this post first. Additionally, you’ll need Postman on your machine to get the most benefit from this post.

Salesforce Setup

Let’s get started. If you haven’t done so already, you’ll need to setup a developer account on https://developer.salesforce.com.

Once you’re logged in, from the gear icon in the top-right, navigate to Setup. From the ‘Quick Find’ box on the left, type in “App Manager” and select the menu item with the same name. On this screen, you’ll see a number of pre-built connection points. Let’s add our own by selecting “New Connected App” in the top-right.

Enter the following information and click “Save”:

Connected App Name: Postman Client
API Name: Postman_Client
Contact Email: <enter_your_email_address_here>
Enable OAuth Settings: checked
Callback URL: http://localhost/callback
Selected OAuth Scopes: Access and manage your data (api)

Once this is setup, you’ll be notified that it takes a few minutes for Salesforce to finalize your setup. Make a note of the “Consumer Key” and “Consumer Secret” that are listed on the app screen. The final piece of information you need is a security token. Navigate to the profile settings (see below) and select the Reset My Security Token item and follow along with the instructions to obtain one we’ll use for logging in.

A quick word about security tokens. They are not needed (and, for better security, it may be preferable to avoid them) if you are on a network with a static IP and can white-list that address under the ‘Settings > Security > Network access’ menu item. If you have that set, you can skip the security token for the rest of the article. You could also use the token now and then change your approach in production. I am just using this approach so that everyone can follow along.

Postman Setup

If you haven’t done so already, please import the Postman collection by downloading it here. You can either download and extract the zip, or copy/paste the raw text. Pressing ctrl+O or navigating to File > Import in Postman will let you perform either to import the collection. When you’re done, it should look similar to this:

You’ll also need to set up an ‘Environment’ for Salesforce, which you can see in the top-right of the screenshot above. You’ll need to set the values for your client-id, client-secret, and also your username/password. I strongly encourage storing these values in the Environment instead of in the Postman request so they can be shared securely with others. Below is an example of what your Postman environment for Salesforce should look like when it’s done.

Be sure you’re also creating the fields for instance-url and access-token above even though they are empty. More on this below.

Authentication

Ok, so with the setup ceremony out of the way, let’s start having fun. Below is an example authentication request to Salesforce. Note this is a POST request, sent x-www-form-urlencoded with a set of key/value pairs which are coming from our environment (in the top-right as Salesforce). Also, please observe how the {{password}} and {{security-token}} fields appear right next to each other concatenated.
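For reference, the form body is a set of key/value pairs along these lines (the Postman variables come from your environment):

grant_type:     password
client_id:      {{client-id}}
client_secret:  {{client-secret}}
username:       {{username}}
password:       {{password}}{{security-token}}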

Something we’re also going to review here is the “Tests” tab of this request. Tests are a fantastic feature in Postman, and they are worth learning more about. For the sake of this post, we are just going to use the Tests feature to set our environment variables for {{access-token}} and {{instance-url}}. This is slick, because now we’ve taken our JSON response, parsed it, and stored pieces of the response into our environment, and can reuse that data automatically in our future requests.
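The test script itself is only a few lines, along the lines of (Postman test scripts are JavaScript; the variable names match our environment):

var json = pm.response.json();
pm.environment.set("access-token", json.access_token);
pm.environment.set("instance-url", json.instance_url);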

API Limits

I think it’s important to make our first ‘real’ request to the Salesforce API as a request to obtain the ‘limits’ for your account. API limits are an important concept in Salesforce as this is a SaaS/multi-tenant environment and you are sharing resources with other users. Note below, we are using our {{access-token}} as stored from our authentication test.

The limits you see here are for a 24-hour period. Our developer account is limited to 15,000 API requests per day, and we have 14,999 remaining, having used our first one to inquire about the limits for our account. A keen eye will also note our “Postman Client” app we defined earlier, has a limit range but nothing is set. Apps can have their own API limit quotas potentially as well, and may be something a Salesforce admin sets for your app.

Limits encourage good software design and should be embraced. You’ll want to consider limits and supporting application design principles to ensure your application can work within them. One quick note: if you are looking to modify a large amount of data, take a look at the Bulk API, which I will discuss in a future post.

Basic Queries

The format for single-object queries in Salesforce is typically: /services/data/<version>/sobjects/<objectType>/<salesforceId>. An example of this is below. The last part of this query is a Salesforce Id, a 15-character case-sensitive key (you may also encounter the 18-character case-insensitive variant), which will never change, even if the record is deleted and later undeleted.

For the example below, the record identifier is likely different for your system, so be sure to head to the Sales app inside your Salesforce portal and filter the Accounts screen to view all, and select one from the list. Salesforce always puts the identifier for the current record in the URL, so you can grab one from your instance there. (ex: https://na85.lightning.force.com/lightning/r/Account/0011U000006ee9iQAA/view)

Something you may see in the future, is querying with a custom object or a custom property. Salesforce differentiates standard and custom objects with a “__c” postfix on object types and properties which are not standard. If I had created a custom object called “MyCustomObject” I would query this with “/MyCustomObject__c/” in the URL. We’ll see this again when we talk about querying with external keys later in the post.

Additionally, when querying an object, you can filter the fields which are returned. Do you see how appending the fields querystring parameter allows us to narrow down the resulting record to just the pieces of information we’d like to retrieve? The smaller payload here will result in improved performance over the wire and also faster deserialization time in our applications.
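For example (reusing the account id from above; the field names chosen here are arbitrary):

GET {{instance-url}}/services/data/v44.0/sobjects/Account/0011U000006ee9iQAA?fields=Name,Industry,Website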

SOQL Queries

Once you’re comfortable with basic querying in Salesforce, a good next step is the SOQL query syntax. SOQL is often used inside the Salesforce ecosystem inside their Apex language programming paradigm, but we can also leverage it here for REST calls as well. As you can see (1) the language is very SQL-like in nature with the one big difference (to me) being joins.
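For instance, rather than a JOIN, a parent-to-child relationship is expressed as a nested subquery. A sketch (executed via the /services/data/<version>/query?q=... endpoint):

SELECT Name, (SELECT FirstName, LastName FROM Contacts)
FROM Account
WHERE Industry = 'Banking'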

The result set we get back from the query has our results (3) and also a few interesting fields worth noting (2). The first are the totalSize and done parameters. If the value for done is false, you will also see a top-level property called nextRecordsUrl that will look like: /services/data/v44.0/query/01g2100000PeqVoAAJ-2000/ (yours will be slightly different). You would then issue a request against this URL to retrieve records 2001–4000, repeating until done comes back true.

Querying with an External Key

Most often when you are using the Salesforce REST API, you are doing so to connect it to an external business system which has its own record identifiers/keys. Salesforce understands this notion and allows you to create properties on your objects which are of type “External Id”.

To set up one of these fields, navigate to Setup (1) > Object Manager (2) > select your relevant record type, Lead in this case (3) > Fields & Relationships (4), and then add a new field of type Text (5) for your key.


On the following screen specify a name for your field, and an API/Field name will be auto-generated for you (1). Note that anything which is a custom field or custom object type in Salesforce will always be referenced with “__c” at the end of it. So in a bit, we’ll see how this field “External_Id” actually maps to “External_Id__c” in the API. Finally, the checkbox at the end (2) indicates that this field is a unique identifier for a record in an external system.

Now that we have our External Id field set up, we can perform queries against it. You can also use this to update records, which I’ll talk about in the future, including how to reference parent/child records using the external id on each (i.e. Accounts + Contacts). See how, instead of using the Salesforce Id in the request, we’ve included the External_Id__c property in the URL and then referenced the record which has a value of 21456 in this field.
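The full request is shaped like this:

GET {{instance-url}}/services/data/v44.0/sobjects/Lead/External_Id__c/21456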

Batch & Composite Queries

Salesforce offers two ways to execute multiple ‘requests’, which they refer to as subrequests, via a single API request in the form of batch queries & composite queries.

Let’s start with batch queries as shown in the screenshot below. In our payload (1), you’ll see we can provide multiple subrequests in an array. The batch query format supports up to 25 subrequests per API request, and each subrequest executes independently. Each subrequest also counts against our API limit noted earlier. The advantage here is we reduce our round-trip processing time and improve the speed of our applications.
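A two-subrequest example body looks roughly like this (note the version-relative url values; the account id is the one from earlier):

POST {{instance-url}}/services/data/v44.0/composite/batch
{
  "batchRequests": [
    { "method": "GET", "url": "v44.0/sobjects/Account/0011U000006ee9iQAA" },
    { "method": "GET", "url": "v44.0/limits" }
  ]
}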

In our response we can see whether any errors were returned (2) and the status code for each independent subrequest/response (3), along with the payload that would be returned. The batch API can also be used to perform PATCH, POST, and DELETE to modify your Salesforce data, which I’ll discuss in the future.

Composite queries are similar to batch queries, but different in a few key ways. First, the individual component subrequests do not count against your API limits. Second, the subrequests are executed in order.

Also, note how the response object from a prior query (1) can later be referenced in a subsequent query (2) in the chain. Each response from a composite query is wrapped in a body object (3). Similar to the batch API, you can also use this to create, update, and delete records instead of just querying it.
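Here’s a sketch of a composite body where the second subrequest references the first’s response via the @{...} syntax (the referenceId names are arbitrary):

POST {{instance-url}}/services/data/v44.0/composite
{
  "compositeRequest": [
    {
      "method": "GET",
      "url": "/services/data/v44.0/sobjects/Account/0011U000006ee9iQAA",
      "referenceId": "refAccount"
    },
    {
      "method": "GET",
      "url": "/services/data/v44.0/query/?q=SELECT Id, LastName FROM Contact WHERE AccountId = '@{refAccount.Id}'",
      "referenceId": "refContacts"
    }
  ]
}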

Salesforce Workbench

There is a great utility application available at https://workbench.developerforce.com/. This portal allows you to do the same operations we described above when prototyping your applications with Postman. Not only do you gain access to the REST API in a nice visual interface, but you can also test SOQL queries, subscribe to Push Topics (to be notified when data changes in Salesforce), view standard and custom object metadata, and more.

Below is an example of the “REST Explorer” in the workbench, available by going to Utilities > Rest Explorer after logging in. You can click through each of these endpoints and see everything Salesforce exposes via the API.

Final Thoughts

Salesforce provides developers with a well designed, properly versioned, full-featured API to connect your applications to their ecosystem. There are multiple ways to read and modify data inside the Salesforce system, all designed so that every user of the platform maintains similar performance in their requests. If you enjoyed this post, I’d love to hear how it helped you. You can find me on Twitter and Github. Also, I’d love it if you subscribe to my semi-regular newsletter of fun things I’ve found online.

 

Statically Typed Newsletter: Volume 1

Inspired by Tim Ferriss’s 5-Bullet Friday and Scott Hanselman’s Newsletter of Wonderful Things, I’ve decided this would be a great place for me to share some of the humorous, educational, insightful, or just generally weird things I found online. It could be videos, book recommendations, gadgets, recipes, and so on.

My plan is to make this an email-based newsletter. Do not be afraid weary internet traveler, your information is not being sold to anyone else, nor am I going to try to enroll you in my time-share vacation home program. It’s just a place for me to share what I personally find interesting enough to share with other like-minded individuals, such as yourself.

If you like what you see, please sign up for the newsletter at the bottom of the page. Without further ado, below is an example of what you would receive with each volume:

If you’ve made it this far, thanks for reading. I plan to share more in the future, so please opt-in to the newsletter below if you’d like to be notified when I do. Also, I’d love it if you shared this list with a friend or co-worker.

 

Improving Model Validation for Entity Framework Core 2.0

If you’ve done .NET programming against a SQL data store for any length of time, you’ve likely run into the following error:

String or binary data would be truncated. The statement has been terminated.

This is an unfortunately common scenario for me, and it feels like there is no easy way (at least that I’m aware of) to identify which column/property is the culprit. I also recently started working with Entity Framework (I had always preferred not using an ORM prior to this) and figured this would be handled by the framework. After a bit of digging, I found this in the EF Core documentation:

Entity Framework does not do any validation of maximum length before passing data to the provider. It is up to the provider or data store to validate if appropriate. For example, when targeting SQL Server, exceeding the maximum length will result in an exception as the data type of the underlying column will not allow excess data to be stored.

Even if I use the [MaxLength(#)] or [Required] attributes, Entity Framework does not check these prior to submitting changes to the database. So it looks like it’s up to the developer to solve this, given that EF Core is a cross-platform framework capable of working with multiple data stores, each with its own rules. As a result, I created the extension method below to perform my own validation:

public static async Task<int> SaveChangesWithValidationAsync(this DbContext context)
{
	// Validate every entity the context is currently tracking before saving.
	var recordsToValidate = context.ChangeTracker.Entries();
	foreach (var recordToValidate in recordsToValidate)
	{
		var entity = recordToValidate.Entity;
		var validationContext = new ValidationContext(entity);
		var results = new List<ValidationResult>();
		// validateAllProperties must be true; otherwise only [Required] is checked.
		if (!Validator.TryValidateObject(entity, validationContext, results, true))
		{
			var messages = results.Select(r => r.ErrorMessage)
				.Aggregate((message, nextMessage) => message + ", " + nextMessage);
			throw new ApplicationException($"Unable to save changes for {entity.GetType().FullName} due to error(s): {messages}");
		}
	}
	return await context.SaveChangesAsync();
}

Note:  We could also implement IValidatableObject on our model classes and perform more unique validation inside the Validate method and have it covered by this extension method as well.

This method can then be called from within your code as follows:

public class Person 
{
  [MaxLength(50)]
  public string Name { get; set; }
  [MaxLength(50)]
  public string Title { get; set; }
}
    
var person = new Person() { Title = "This title is longer than 50 characters and will throw an exception" };
context.PersonSet.Add(person);
await context.SaveChangesWithValidationAsync();
 

Introduction to Salesforce for Developers

Many people have heard of Salesforce.  Right now, you’re probably thinking of a web portal used by sales people to find new customers, maintain accounts, and track opportunities, and you’d be right to think that.   Salesforce is a great platform for CRM, but it’s so much more than that.

There are plenty of articles out there on the web tracking the meteoric growth of Salesforce as a company.  One reason for this, is that Salesforce is becoming a full-blown platform, allowing developers (or even savvy business users) to create workflows, business processes, and more using an entire ecosystem of tooling. The company champions the philosophy of the fourth industrial revolution, which follows the digital revolution of computers, and is set to unify and connect people, devices, and data in unprecedented ways.

The core of Salesforce’s business is indeed CRM with the primary product being sold as Sales Cloud. You can sign-up for a free sandbox right now if you’d like at https://developer.salesforce.com. Sales Cloud will allow you to manage leads (new prospects, not yet customers), manage accounts (customers) and the contacts working there. There are features around creating business processes, validation rules, reporting, and more to assist in the sales process.

In addition to Sales Cloud, Salesforce offers a number of additional products:

  • Service Cloud – support and case management for your customers.
  • Marketing Cloud – outreach, email marketing, and journey building
  • Commerce Cloud – solutions for both B2C (formerly Demandware) and B2B (formerly CloudCraze) commerce
  • Heroku – cloud based application management, similar to Azure or AWS.
  • Mulesoft – unify and connect all your disparate business systems into a single solution.
  • Community Cloud – page & content builder for forums, portals, and websites.
  • Quip – workplace collaboration (documents, calendars, chat)
  • Trailhead – learn about Salesforce and earn points/badges.

Salesforce is a multi-tenant SaaS application, meaning multiple organizations will share the same instance. Think of this as renting office space in a downtown high-rise instead of building your own office in the suburbs. When you rent the space, the landlord (Salesforce) is taking care of everything for you in terms of water, electricity, security, and mail delivery, and likely offers several on-site services such as dry cleaning. In a similar fashion, Salesforce provides you with hosting, application monitoring, security, API extensibility, and much more when you build your application inside their environment. It’s less for you to worry about as a developer.

You aren’t limited to the core objects of each platform and can create your own data types to extend and enhance the system. You can also create your own siloed set of business functionality. Let’s use a contrived example: I want to manage my video game collection on Salesforce, which is about as far from “sales” as one could get. Inside the web portal, I can create my objects (Games, Publishers, Platforms), specifying the field types, validation rules, and any parent-child relationships. Once my data types are set up, I can enter records right away using the Salesforce UI. Additionally, my data is available via a REST or SOAP API automatically. It’s also available for me in the Salesforce mobile app. I didn’t need to do any programming to set up the basic CRUD behaviors, security, logging, or anything else for these new objects; everything exists there for me. Sure, I can extend the application (more on that later), but a lot of the tedious boilerplate work is already taken care of for me.

If I want to write applications that work with this data I created, I can do so using a programming language known as Apex, which is syntactically very similar to C# or Java. I can write database queries inside this programming language using SOQL (Salesforce Object Query Language), which is very similar in syntax to SQL. If I want to insert/update/delete data in Salesforce, I can use DML (Data Manipulation Language). I can also search this data using SOSL (Salesforce Object Search Language), which has a similar syntax to Lucene for anyone familiar with the popular search framework that powers Elasticsearch/Solr. Development is done either inside the ‘Developer Console’ in Salesforce or with an extension for VS Code.

As if the full development environment were not enough to convince you, there is also a full-featured learning portal, Trailhead, which gamifies the learning process, allowing you to earn badges and points for completing tutorials, and even provides hands-on exercises and checks those exercises for you. There are exercises for every type of user (business user, admin, or developer) with varying degrees of difficulty that allow you to get a quick start on the basics, but also continue to learn and grow to become an advanced user. Once you’re comfortable developing on the platform, Salesforce even offers several certifications to show prospective clients that you are indeed qualified and capable.

There are a large number of courses available to developers on both Pluralsight and Udemy (and others, I’m sure) for anyone looking to learn more about Salesforce development. Salesforce has a large community of very passionate customers as well. If you were to look at any job board, you would see Salesforce skills are in high demand, and growing rapidly along with the company itself. They also host a number of conferences each year, including Dreamforce in San Francisco.

I really hope this post gave you a good high-level overview of what Salesforce offers. In the future, I plan to dive into more specifics of the Salesforce platform as I continue to learn all it has to offer. A few key areas I’m really interested in learning more about are Einstein Analytics (Salesforce’s AI/BI platform), Mulesoft, and Commerce Cloud.

 

Create Your Own .NET Core Templates in 4 Easy Steps

Do you ever develop prototypes, or starter projects/accelerators, that you’d like to use again in the future? A good way to do that is by creating custom templates for dotnet. Once completed, anytime you want to create a new project of that type in the future, you can key in “dotnet new <your-template-shortname>” and you’re off, complete with correct namespaces. You can even do conditional checks, or variable replacements.

1. To start, clone or download the MyGameStartup project which will make following along easy. There is nothing special about this, it’s just for the purposes of showing how you can bundle multiple projects, and do some variable replacements. Note: You’ll need .NET Core 2.1 SDK for this at a minimum. Once open, you’ll see a solution file, and a few simple projects with some basic Entity Framework Core behavior.

2. Note that in the downloaded project, there is a folder labeled “.template.config” with a file inside it labeled “template.json”. Let’s review the contents of that file. Feel free to update this file with values you plan to use for your own project type, but you can leave these if you want to just continue with the demo.

I’ve added comments to the file so it is easy to understand, along with a link to the official documentation. There are more features than I am covering here if you wish to add them. One noteworthy setting in the file is the “shortName” property; this is the trigger for your project, i.e. ‘dotnet new mygamestartup’. Also, if you look at the section labeled “symbols” and cross-reference it with the project’s ‘appsettings.json’ file, you’ll see the connection string’s database will be replaced with the “--db” parameter once we run it.
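For anyone reading along without the download handy, a trimmed version of the file looks roughly like this (the symbol name and the value it replaces are assumptions based on the demo project; adjust for your own):

{
  "$schema": "http://json.schemastore.org/template",
  "author": "Your Name",
  "classifications": [ "Web", "Starter" ],
  "identity": "MyGameStartup.Template",
  "name": "My Game Startup",
  "shortName": "mygamestartup",
  "sourceName": "MyGameStartup",
  "symbols": {
    "db": {
      "type": "parameter",
      "datatype": "text",
      "replaces": "MyGameStartupDB",
      "defaultValue": "MyGameStartupDB"
    }
  }
}

The sourceName value is what gets swapped for the -n argument at creation time, which is how the correct namespaces end up in the generated project.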

3. Ready to add this new project type to your available list? I’ll also show you how to remove it if you no longer plan to use this one. Run the following command, from the same folder as the downloaded .sln file and .template.config folder: “dotnet new -i .”. The dot here refers simply to the current directory. You could have also specified the path, but this is easier. If successful, you should see a list of project types, along with your project type added.

New project added screenshot from ConEmu terminal

Woohoo! You now have your own custom accelerator template you can use for other projects. Side note: This screenshot is from the ConEmu terminal which I use. You can download it here.

An important note also on this step. Running the “dotnet new -i .” command will bundle the current folder. I may not have discovered the setting yet, but keep in mind this skips empty folders. So if your wwwroot folder was empty, you will see this is not created after installing. I added a site.css to my version to avoid this.

4. We’re moving along quickly now. Next step… fire up a new project with our new template! Navigate to the directory you’d like your project to be a folder within, such as “C:\Users\myusername\source\repos”, and then issue the command “dotnet new mygamestartup --db MyCustomStartupDB -n MyCustomName”. The database name will be set in the connection string in appsettings.json and a folder will be created named MyCustomName. Your project’s namespace will also be MyCustomName. Note: If you get an error about a lock, be sure your Visual Studio instance with the “MyGameStartup” project you downloaded in step 1 is closed. You may need to remove the MyCustomName folder and try again.

Follow the steps in the README to create the database with the name you specified, and also steps on how to run the Entity Framework Core Update-Database command properly. After that, you’re done! You just created a new project from a template you previously created.

Optional: To uninstall the project, you can use the “dotnet new -u” command to list the currently installed templates. You should see your “MyGameStartup” project listed here. You can simply remove it with the command “dotnet new -u <Path-To-Folder>” as shown below.

Next Steps: A few steps I haven’t taken yet, but may in the future, is to add the following features to my template:

  • Create a nuget package to easily distribute your new package.
  • Add a new ‘symbol’ to the template.json which has datatype ‘choice’. A choice symbol will allow the user to pick from several options when creating your template.
  • Modifiers can be used to conditionally include or exclude certain code. You may wish to optionally include authentication, as an example. More details here.
 

Exploring Video Games With the New IHttpClientFactory In .NET Core 2.1

REST services are everywhere. It’s tough to find an application that doesn’t leverage an externally hosted REST service in some way. Prior to .NET Core 2.1, a common library that was used to perform REST requests was RestSharp. I love RestSharp, but let’s explore the new alternative IHttpClientFactory that became available as part of .NET Core 2.1.

In this demo, we’re going to cover a bunch of awesome features:

  • Simple dependency injection
  • Named instances
  • Typed clients
  • Message handlers

In addition, for this service, we’re going to leverage the Internet Game Database API to pull in some data about Nintendo Switch games. As an aside, if you don’t already own a Nintendo Switch, it’s a fantastic device for the family, for travel, or just for fun. If you want to follow along with the code, you’ll need to create a free account here. Once you have your API key, you’ll want to be sure to include that in the code. There is a host of functionality available with this API, and you could use it to start building your video game collection, embed information about games in your apps, and much more.

You may also wish to download the source code for this. It is available on Github here.

Basic Usage

To start, let’s open Visual Studio 2017 and create an ‘Empty’ project denoting ASP.NET Core 2.1 as the framework.

Once we’re up and running, let’s add a few lines to our Startup.cs class. It should look like this when we’re done. Specifically, note the new services.AddHttpClient(); extension method on IServiceCollection which is exposed in 2.1.
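If you’re following along without the screenshot, the relevant registration is a one-liner (a sketch; your ConfigureServices will contain other registrations too):

public void ConfigureServices(IServiceCollection services)
{
    // Registers IHttpClientFactory and its default handler pool.
    services.AddHttpClient();
    services.AddMvc();
}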

Next, let’s create a GameController and a Game class for the purposes of deserialization. Notice how we inject the IHttpClientFactory into the controller constructor? We then use the CreateClient() method of the factory inside the controller action to create a basic instance. A noteworthy feature of IHttpClientFactory is that it maintains a pool of reusable message handlers, each with a default lifetime of two minutes. This keeps the overhead of connection management and instantiation at a manageable level, which is great for our overall performance and scalability.
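A sketch of what that controller might look like (the IGDB endpoint shape and the user-key header are assumptions based on the v2 API of the time):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class GameController : Controller
{
    private readonly IHttpClientFactory _httpClientFactory;

    public GameController(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    [HttpGet("game/{id}")]
    public async Task<IActionResult> Get(int id)
    {
        // CreateClient() hands back a client backed by the factory's handler pool.
        var client = _httpClientFactory.CreateClient();
        client.DefaultRequestHeaders.Add("user-key", "<your_api_key>");
        var json = await client.GetStringAsync($"https://api-endpoint.igdb.com/games/{id}?fields=name,rating");
        return Content(json, "application/json");
    }
}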

Named Instances

In the above code, we are defining the HttpClient inside the controller class, but it’s also possible to have a “named” instance of the HttpClient. We could re-write our line from our Startup.cs class above to use this code instead:
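Something along these lines (the name “igdb” and the configuration key are arbitrary choices of mine):

services.AddHttpClient("igdb", client =>
{
    client.BaseAddress = new Uri("https://api-endpoint.igdb.com/");
    client.DefaultRequestHeaders.Add("user-key", Configuration["Igdb:ApiKey"]);
});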

Then, inside our controller, we would inject the IHttpClientFactory the same way, but we would change the CreateClient() method to use our named instance.
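That change is a one-liner, assuming the name used above:

var client = _httpClientFactory.CreateClient("igdb");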

Typed Clients

Cool stuff so far, right? Let’s take it one more step further: typed clients. Let’s add the interface and class below to our application. Note how we have a private HttpClient as part of our IgdbClient object which exposes developer friendly names for each of the Igdb.com API endpoints we can consume in our application. Note: There are plenty more fields available in this API, including artwork, screenshots, and more but we are just keeping it simple for the demo.
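A minimal sketch of the pair (the interface members and endpoint paths are assumptions; the real project exposes the methods it needs):

using System.Net.Http;
using System.Threading.Tasks;

public interface IIgdbClient
{
    Task<string> GetGameAsync(int id);
    Task<string> GetPopularGamesAsync();
}

public class IgdbClient : IIgdbClient
{
    private readonly HttpClient _httpClient;

    // The factory injects an HttpClient configured per the Startup registration.
    public IgdbClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public Task<string> GetGameAsync(int id) =>
        _httpClient.GetStringAsync($"games/{id}?fields=name,rating,summary");

    public Task<string> GetPopularGamesAsync() =>
        _httpClient.GetStringAsync("games/?fields=name,popularity&order=popularity:desc");
}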

Finally, let’s modify our Startup class to inject our configured client for our application, and also modify our GameController class so that it exposes these two new methods for testing purposes. We’re configuring the Startup class to inject our application-specific configuration (so we can distribute this class to other applications which each have their own API key), and we’re also injecting the IgdbClient into the controller.
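The typed-client registration might look like this (again, the base address and configuration key are assumptions):

services.AddHttpClient<IIgdbClient, IgdbClient>(client =>
{
    client.BaseAddress = new Uri("https://api-endpoint.igdb.com/");
    client.DefaultRequestHeaders.Add("user-key", Configuration["Igdb:ApiKey"]);
});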

Message Handlers

The last feature we’re going to discuss is message handlers. Handlers can be attached to clients to monitor messages as they go in, or messages as they go out. You could perform logging for messages or duration, request validation, and more. Handlers can be chained as middleware as well. We’re going to keep it simple for our demo, and simply verify that the user-key which is required for the Igdb.com API is specified for the request before it goes out for our application. Let’s add the following class to our app:
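A sketch of that handler (the class name is mine; the short-circuit pattern is the standard DelegatingHandler approach):

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class RequireUserKeyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Short-circuit with a 400 before the request ever leaves the machine.
        if (!request.Headers.Contains("user-key"))
        {
            var response = new HttpResponseMessage(HttpStatusCode.BadRequest)
            {
                RequestMessage = request,
                Content = new StringContent("The user-key header is required.")
            };
            return Task.FromResult(response);
        }
        return base.SendAsync(request, cancellationToken);
    }
}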

Let’s also modify our Startup class, specifically the ConfigureServices method so that the following appears after we registered our IgdbClient.
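Roughly, that addition is a transient registration for the handler plus a chained call on the client builder:

services.AddTransient<RequireUserKeyHandler>();

services.AddHttpClient<IIgdbClient, IgdbClient>( /* ...configuration as above... */ )
        .AddHttpMessageHandler<RequireUserKeyHandler>();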

Now, if we try to call the /game/id or /game/popular endpoints without having specified our user-key, this will result in a 400 Bad Request error before it is even attempted to be sent to the Igdb API. This potentially saves us API usage limits and cost as well as improves our performance by failing before the attempt is made.

 

Adding AppSettings.json Configuration to a .NET Core Console Application

This post will walk you through the steps necessary to add a strongly typed appsettings.json configuration file to your console application. I’ll dispense with the formalities of writing a detailed introduction on why you would be interested in doing this and jump right into the steps involved.

Note: This post assumes .NET Core 2.1, but I believe any 2.x version will work.

1. Add the following NuGet packages to your console application:
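For a 2.x console app, the set in question is likely:

  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Json
  • Microsoft.Extensions.Configuration.Binder
  • Microsoft.Extensions.Configuration.UserSecrets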

2. Right-click your console application, and select “Unload project”. Once the project has been unloaded, right-click the project and select “Edit <your_console_project_name>.csproj”.

3. Add the following just below the <TargetFramework> element:
<UserSecretsId>93d033dc-12eb-446a-91a6-efec88dfb437</UserSecretsId>
(note: you should replace this guid with your own)

4. Add the following to a <ItemGroup> with other CliToolReference elements, or add your own:
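The reference in question is likely the Secret Manager CLI tool (the version should match your SDK):

<DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.1.1" />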

5. Right-click your console application and select the “Reload project” option.

6. Open the developer command prompt and navigate to the directory with your console application, then run:
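A user-secrets call such as the following will do it (the key and value are placeholders):

dotnet user-secrets set "MySettingsConfig:ConnectionString" "<your_connection_string>"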

Note: This will create a secrets.json file located at: %APPDATA%\Microsoft\UserSecrets\<SecretId_From_Step_3>\secrets.json
Edit the newly created file and change the contents to:
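Something like the following, where the section name matches the class we create in step 8 and the connection string value is invented:

{
  "MySettingsConfig": {
    "ConnectionString": "Server=localhost;Database=MyDatabase;Trusted_Connection=True;"
  }
}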

7. Create a new JSON configuration file for your console application and name it appsettings.json. Set the contents of the file to the JSON below. Important: Be sure to right-click this file and select Properties, then change the “Copy to Output Directory” option to “Copy if newer”.
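A minimal version, with an invented AccountName value:

{
  "MySettingsConfig": {
    "AccountName": "MyAccountName"
  }
}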

8. Create a class named MySettingsConfig:
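The class just mirrors the settings we defined (AccountName from appsettings.json, ConnectionString from user secrets):

public class MySettingsConfig
{
    public string AccountName { get; set; }
    public string ConnectionString { get; set; }
}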

9. Add the following code to your Main routine inside Program.cs
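A sketch of that Main routine, layering appsettings.json and user secrets and binding to the strongly typed class:

using System;
using System.IO;
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false)
            .AddUserSecrets<Program>() // resolved via the UserSecretsId in the csproj
            .Build();

        // Bind the "MySettingsConfig" section onto our strongly typed class.
        var settings = configuration.GetSection("MySettingsConfig").Get<MySettingsConfig>();
        Console.WriteLine($"AccountName: {settings.AccountName}");
        Console.WriteLine($"ConnectionString: {settings.ConnectionString}");
        Console.ReadKey();
    }
}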

If your console shows your settings correctly, congratulations! If you’re missing the AccountName and ConnectionString settings, be sure you followed the step to “Copy to Output Directory” listed in step 7.