Adding Useful Information to ASP.NET Core Web API Serilog Logs

In my last post, we set up structured logging leveraging Serilog and SQL Server. In this post, we’ll look closely at the log entries themselves for a web API – what comes out of the box, what’s missing and how we can fill the gaps …

A good log saves debugging time – particularly in production – by quickly helping us pinpoint the root of a problem. A log containing a wealth of relevant information reduces the number of “I can’t reproduce” issues.

What we get out of the box

Let’s look at what we get out of the box then. We’ve already enabled Serilog with SQL Server as the destination. Let’s now tell Serilog to log as much as it can in appSettings.json:

We have a super simple controller for a contact resource in our web API that sits on top of an EF core data context. We only have methods for getting a record and inserting a record, which will be fine for having a close look at the log entries that are generated.

We also have an even simpler authentication controller that signs us in as “Carl” when we POST to api/auth.
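For reference, a minimal sketch of the sort of controller this could be – the exact implementation isn’t important, it just needs to sign in a user called “Carl” – assuming cookie authentication is registered in Startup:

using System.Collections.Generic;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class AuthController : Controller
{
    [HttpPost]
    public async Task<IActionResult> SignIn()
    {
        var identity = new ClaimsIdentity(
            new List<Claim> { new Claim(ClaimTypes.Name, "Carl") },
            CookieAuthenticationDefaults.AuthenticationScheme);

        // Issue the authentication cookie that subsequent requests will carry
        await HttpContext.SignInAsync(
            CookieAuthenticationDefaults.AuthenticationScheme,
            new ClaimsPrincipal(identity));

        return Ok();
    }
}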

After signing in and calling GET api/contacts/{contactId} we get:
Out of box log

That’s quite a bit of information, but let’s drill in more …

These are the properties we get from the request:

Notice that the request headers and body are missing. There is no information about the signed-in user.

These are the properties we get from the response:

Notice that the response headers and body are missing. Again, there is no information about the signed-in user.

These are the properties we get from an EF core query execution:

Notice that the parameter value is missing – we just get “?”.

Let’s create an exception by adding a field in our data model class that doesn’t exist in the database:
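For example, something like the following – the shape of Contact is illustrative, the point is just that the new property has no matching column:

public class Contact
{
    public int ContactId { get; set; }
    public string FirstName { get; set; }
    public string Surname { get; set; }

    // No corresponding column in the Contact table - EF core will generate
    // SQL referencing a non-existent column and the query will throw
    public string MiddleName { get; set; }
}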

Here’s what we get when we GET api/contacts/{contactId}:

EF core exception

If we drill into the 2 highlighted entries, we have the following properties:

… and …

So, it’s great that we have the call stack and the SQL, but the parameter value for the SQL would have been nice.

What’s missing and why?

So, the key information that is missing is:

  • The request headers
  • The request body
  • The response headers
  • The response body
  • The EF core query parameter values
  • The user name

I don’t know for sure why the above are missing but I’m guessing it’s a combination of security and performance.

In terms of security, we need to secure the logs. Regardless of whether we add more sensitive data, they already contain sensitive information about the internals of our app. We’ve taken a great first step by putting them in SQL Server, because a person needs a valid SQL logon to access the log directly. If we expose the log via a web page, we need to make sure that page is authenticated and also protected by an appropriate authorisation rule. So, as long as we secure the log properly, I think we are good to add the missing information.

In terms of performance, let’s do a benchmark test now and repeat it after we’ve added the missing information to determine the impact. We’re going to use Rick Strahl’s excellent WebSurge tool for the test.

Here are the results of the benchmark test where we repeatedly GET api/contacts/{contactId} for 10 seconds.

So, the average request time is 3.2ms when the app is running in IIS.

Before we go on and add this stuff, why do we need these bits of information? The information helps us pin problems down. The request headers and body are particularly important for web APIs – it’s much easier to reproduce a problem if we have all the information needed to replay the request. If the problem is unexpected data rather than an unexpected exception, the response headers and body give us what we need to work back through the request and determine what happened. For the EF core queries, we can’t replay an exact query without the parameter values – a performance issue, for example, may depend on specific parameter values. Finally, some problems are user specific, so capturing the user lets us quickly get all the log entries for the problematic user.

Adding EF core parameter values

This is a quick one. All we need to do is switch on “EnableSensitiveDataLogging” when the data context is configured.
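If the data context is registered in Startup, that might look something like this (the connection string name is illustrative):

services.AddDbContext<DataContext>(options =>
    options
        .UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))
        // Tell EF core to include parameter values in its log output
        .EnableSensitiveDataLogging());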

This now includes the parameter value in the log entry:

Adding missing request and response information

This is a little more work. We’re going to use middleware so that we only have to write the code once (rather than add code to each controller and action method).

We’re going to call our middleware SerilogRequestLogger. So, let’s register this when the app starts up. It’s important that this is registered just after authentication in the middleware pipeline, or we won’t have access to the user information.
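The registration in Startup.Configure might look something like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAuthentication();

    // Must come after UseAuthentication so the middleware can see the signed-in user
    app.UseMiddleware<SerilogRequestLogger>();

    app.UseMvc();
}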

Now on to the implementation of the middleware. We add the user name to all the log entries by using LogContext.PushProperty. Capturing the request and response bodies is a little tricky because they are streams.
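Here’s a condensed sketch of what such middleware might look like – the property names and details are illustrative rather than a definitive implementation:

using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Internal;
using Serilog;
using Serilog.Context;

public class SerilogRequestLogger
{
    private readonly RequestDelegate _next;

    public SerilogRequestLogger(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        // Push the user name onto the log context so every entry written
        // during this request carries it
        using (LogContext.PushProperty("UserName", httpContext.User.Identity.Name))
        {
            // The request body is a forward-only stream, so allow it to be rewound
            // (EnableRewind lives in Microsoft.AspNetCore.Http.Internal; later
            // versions use EnableBuffering instead)
            httpContext.Request.EnableRewind();
            string requestBody;
            using (var reader = new StreamReader(httpContext.Request.Body, Encoding.UTF8, false, 1024, true))
            {
                requestBody = await reader.ReadToEndAsync();
            }
            httpContext.Request.Body.Position = 0;

            Log.Information("Request {Method} {Path} {@Headers} {Body}",
                httpContext.Request.Method,
                httpContext.Request.Path,
                httpContext.Request.Headers.ToDictionary(h => h.Key, h => h.Value.ToString()),
                requestBody);

            // Swap the response body stream for one we can read back afterwards
            var originalResponseBody = httpContext.Response.Body;
            using (var responseBodyStream = new MemoryStream())
            {
                httpContext.Response.Body = responseBodyStream;

                await _next(httpContext);

                responseBodyStream.Position = 0;
                string responseBody = await new StreamReader(responseBodyStream).ReadToEndAsync();

                Log.Information("Response {StatusCode} {@Headers} {Body}",
                    httpContext.Response.StatusCode,
                    httpContext.Response.Headers.ToDictionary(h => h.Key, h => h.Value.ToString()),
                    responseBody);

                // Copy what was written back onto the real response stream
                responseBodyStream.Position = 0;
                await responseBodyStream.CopyToAsync(originalResponseBody);
            }
            httpContext.Response.Body = originalResponseBody;
        }
    }
}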

Here’s an example of what we get in our request information log entry for a POST to api/contacts:

… and our EF call:

… and our response information:

Dealing with exceptions

So far, so good – we have added some extra useful information and this is included in exceptions as well (even though I’ve not shown it above). We can go a bit further though …

We can improve the experience of the consumer of the API by giving them an error reference that they can quote when raising a support ticket. We need to wrap an exception handler around the await _next(httpContext) line in our middleware:
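Something along these lines, inside the middleware’s Invoke method – the response shape and wording are illustrative, and JsonConvert assumes the Newtonsoft.Json package:

try
{
    await _next(httpContext);
}
catch (Exception ex)
{
    // Give the consumer a reference they can quote in a support ticket
    string errorId = Guid.NewGuid().ToString();
    Log.Error(ex, "Unhandled exception processing request. Error id: {errorId}", errorId);

    httpContext.Response.StatusCode = 500;
    httpContext.Response.ContentType = "application/json";
    await httpContext.Response.WriteAsync(JsonConvert.SerializeObject(new
    {
        ErrorId = errorId,
        Message = "Sorry, something went wrong. Please quote the error id if you contact support."
    }));
}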

Here’s what the consumer gets when they hit a problem in our code:

Consumer exception

We can quickly get to that entry in our log by writing a SQL query to filter on the errorId property:

How much slower is our middleware?

To finish this post off, let’s see the performance impact of logging all this extra stuff. Let’s again repeatedly call GET api/contacts/{contactId} for 10 seconds.

Perf with middleware

The average request time was 3.6ms. So, roughly an addition of 0.4ms for this test.

Not that much slower!


ASP.NET Core Logging with Serilog and SQL Server

Serilog is a great third-party library for structured logging in our ASP.NET Core apps. Structured logging is key to producing easily readable and filterable logs.

Using SQL Server as a log destination, allows us to leverage the power of SQL querying for log filtering. It can be a good choice if our app is already using SQL Server.

So, how do we implement a Serilog SQL log in ASP.NET Core 2.0 …

First we need to bring in the following NuGet packages:

Next up, we need to change how logging is set up in Program.cs. The 3 lines at the start of Main() tell the program to use Serilog as the logger and to read the configuration from appsettings.json.
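Something like the following, assuming the Serilog.AspNetCore, Serilog.Settings.Configuration and Serilog.Sinks.MSSqlServer packages are installed (the exact lines may differ slightly):

using System.IO;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        // Build configuration so that Serilog can read its settings from appsettings.json
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        Log.Logger = new LoggerConfiguration()
            .ReadFrom.Configuration(configuration)
            .CreateLogger();

        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .UseSerilog() // plug Serilog into the ASP.NET Core logging pipeline
            .Build();
}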

Serilog can then be configured to use SQL Server as a destination in appSettings.json, along with the minimum logging level. We need to specify the connection string for the logging database and the table name to log the data into.

We can get Serilog to create our Log table automatically, but let’s do this ourselves so that we have control of our schema. For example, we want the Properties column to be based on the xml data type so that we can query it (Serilog creates this as nvarchar).

Below is TSQL script to create the Log table:

We can then write to the log in our code, logging structured objects using {@object} in the message template. Below is a web API action method for getting and returning a record (a contact in this case). We log when the contact is fetched from the cache / database and also when the contact is set in the cache.
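Something along these lines – a representative sketch rather than the exact code, assuming an injected EF core data context, an IMemoryCache and an ILogger<ContactsController>:

[HttpGet("{contactId}")]
public async Task<IActionResult> GetContact(int contactId)
{
    string cacheKey = $"contact-{contactId}";

    if (_cache.TryGetValue(cacheKey, out Contact contact))
    {
        // {@contact} tells Serilog to destructure the object into the Properties column
        _logger.LogInformation("Got {@contact} from cache with key {cacheKey}", contact, cacheKey);
    }
    else
    {
        contact = await _dataContext.Contacts.FindAsync(contactId);
        if (contact == null)
        {
            return NotFound();
        }
        _logger.LogInformation("Got {@contact} from database", contact);

        _cache.Set(cacheKey, contact);
        _logger.LogInformation("Set {@contact} in cache with key {cacheKey}", contact, cacheKey);
    }

    return Ok(contact);
}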

Now that Serilog and SQL Server are set up and wired together and we have some logging code, we should get our logs output to our SQL Server table.

Get contact log

Note that if we have problems getting the log to write to SQL Server, we can use Serilog.Debugging.SelfLog.Enable() in Program.Main() to surface errors from Serilog.
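For example (where the output goes is up to you):

// In Program.Main(), before the logger is configured
Serilog.Debugging.SelfLog.Enable(msg => Console.Error.WriteLine(msg));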

The Properties column contains useful additional information, including what we specified in our structured log after the contact is added to the cache:
Get contact log properties

We can extract specific data in the Properties column using XQuery syntax. We can even filter on values in the Properties column.

For example, if we wanted to find when a particular contact was added to the cache, we could use something like:

XQuery

Pretty useful!


Getting started with Redis

Recently, I’ve been looking into how to improve the performance and scalability of ASP.NET Core web APIs. One strategy for doing this is caching data in a Redis server. Redis is a popular open source key-value store that is often used for caching.

In this post, we’ll briefly cover how to get started with Redis on Windows.

Installation

For the installer, head to the latest Windows Redis release and download and run the latest MSI.

I’d recommend ticking the box to add Redis to the PATH variable to make interacting with Redis from the command line easier:
Redis PATH

Also, take note of the port that redis will run on:
Redis Port

Let’s quickly check it’s installed and running OK by testing it in the CLI:
Redis Test

We can also install Redis Desktop Manager, a GUI management tool for Redis.

After we have installed Redis Desktop Manager, we can connect to our Redis server.
Redis Desktop Manager Connection

We can then see the key and value we set earlier in the Redis CLI:

We can edit the value:
Redis Desktop Manager Edit Value

As you would expect, you can add new key-value pairs as well:
Redis Desktop Manager New Value

Why redis?

  • Redis is super fast – according to the docs, it can potentially perform up to 120,000 requests per second
  • Redis is very simple – you can store data with a single set command and retrieve it with the get command. There’s no need to define tables etc.
  • Redis is durable – it can write the data to disk …
  • Redis works on multiple platforms and with many languages – including Windows and ASP.NET Core!

We’ll wire a redis cache up to an ASP.NET Core web API in a future post …


Search After Stop Typing React Component

In this post I’m going to go through how to implement a search criteria component that only invokes the search in a calling component after the user has stopped typing.

So, we have the following search criteria component that currently invokes the search when a submit button is clicked:


… and this component is referenced like the following in a calling component:

At the moment, our submit button triggers the form’s submit, which in turn calls handleSubmit(), which then calls this.props.doSearch to invoke the search (our calling component actually does the search).

Here’s what our component looks like at the moment:

We want to remove the submit button and just have the search invoke when the user stops typing.

So, let’s first remove the form and submit button from our render function:

Now let’s just invoke the search in handleCriteriaChange after we have set the criteria in the state:

So, our component now invokes the search on every keystroke:

Not quite what we want!

We need to “debounce” the search. That’s where the debounce function in the excellent lodash library comes in.

So, let’s bring lodash and the TypeScript types into our project:

We now need to import just the debounce function into our component:

If we look at the docs, debounce creates a new function that delays invoking the function passed to it. So, let’s create our “debounced” function, leaving in the console.log() so that we can check that doSearch is only called when the user has stopped typing:


… and let’s call our new function from handleCriteriaChange:


… and let’s give this a try:

… which is just what we want!

Here’s the full code for our debouncing search criteria component:


Creating a Multi-Tenant ASP.NET Core Web API with SQL Server RLS

In this post we’re going to leverage SQL Server Row Level Security (RLS), Entity Framework Core and ASP.NET Core to create a multi-tenant database with a multi-tenant web API …

First, let’s create a database containing a couple of tables – one to hold our tenants and one to hold some real data (products in this example):

We’ll add some test data in there as well. We’ll have 2 tenants – one with 3 products and the other one with 2 products:

Before we move on to RLS, let’s create a “super user” that will have access to data in all the tenants (this is useful for debugging). While we are at it, we’ll create a “normal user” that our web API will use to access data:

Now on to the RLS. First we need to create a predicate function that is going to apply a filter when the product table is accessed for the given tenant. The tenant for a given row is passed into the function. The tenant that the API is operating for is stored in SESSION_CONTEXT('TenantId') – we’ll come on to how this is set in the web API soon. We also need to make sure our super user has access to all the rows in the table:

Now we can bind our predicate function to our product table. The FILTER predicate ensures we only read data for a given tenant and the BLOCK predicate ensures we only write data for a given tenant:

So, let’s give this a quick test, connecting as normaluser:


We should get the following results back (3 rows for the first query and 2 for the second):

If you connect as superuser and run SELECT * FROM dbo.Product, you should get all 5 rows.

Cool, we’re done with our database. Let’s move on to our web API …

Let’s create a new ASP.NET Core Web Application in Visual Studio, choosing the Web API project template.

First let’s create our model classes to hold our tenants and products …

Now, we’ll bring in EF core using NuGet (I’m using the stable 1.1.2 packages). We’ll need Microsoft.EntityFrameworkCore and Microsoft.EntityFrameworkCore.SqlServer.

Time to create our EF core data context. Here’s a basic class that maps our tenants and products:
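Something like this (the property and table names are illustrative, matching the SQL above where shown):

using Microsoft.EntityFrameworkCore;

public class DataContext : DbContext
{
    public DataContext(DbContextOptions<DataContext> options)
        : base(options)
    {
    }

    public DbSet<Tenant> Tenant { get; set; }
    public DbSet<Product> Product { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map to the singular table names we created in the database
        modelBuilder.Entity<Tenant>().ToTable("Tenant");
        modelBuilder.Entity<Product>().ToTable("Product");
    }
}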

We’re not done with DataContext yet – we need to set SESSION_CONTEXT('TenantId') for every query that EF core does … we’ll come back to this after we’ve got the tenant from the API.

So, let’s create some middleware that will inspect the API key from the request to determine the tenant:
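A sketch of what that middleware might look like – the header name, class name and the ApiKey property on Tenant are illustrative:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.EntityFrameworkCore;

public class ApiKeyMiddleware
{
    private readonly RequestDelegate _next;

    public ApiKeyMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, DataContext dataContext)
    {
        // The consumer identifies their tenant by passing an API key in a header
        string apiKey = httpContext.Request.Headers["x-api-key"];

        var tenant = await dataContext.Tenant
            .FirstOrDefaultAsync(t => t.ApiKey == apiKey);

        if (tenant == null)
        {
            httpContext.Response.StatusCode = 401;
            return;
        }

        // Place the tenant in a dictionary on the HttpContext so that
        // DataContext can pick it up later in the request
        httpContext.Items["TenantId"] = tenant.TenantId;

        await _next(httpContext);
    }
}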


Please note the line in the above code where we place the tenant in a dictionary on the HttpContext. This is how we are going to get the tenant in DataContext.

Before we go back to DataContext, let’s go and do all our registrations in Startup:

Time now to go back to DataContext.
We can set the tenant in our constructor:

We can then set SESSION_CONTEXT(‘TenantId’) every time a connection is opened before any queries are run by EF core.
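A sketch of how the two pieces could fit together – here the constructor captures an IHttpContextAccessor, and a handler on the connection’s StateChange event pushes the tenant from HttpContext.Items into SESSION_CONTEXT whenever EF core opens the connection (the original implementation may differ in the details):

using System.Data;
using Microsoft.AspNetCore.Http;
using Microsoft.EntityFrameworkCore;

public class DataContext : DbContext
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public DataContext(DbContextOptions<DataContext> options, IHttpContextAccessor httpContextAccessor)
        : base(options)
    {
        _httpContextAccessor = httpContextAccessor;

        // EF core opens and closes the connection around each query, so hook the
        // open and push the tenant into SESSION_CONTEXT for the RLS predicate to use
        var connection = Database.GetDbConnection();
        connection.StateChange += (sender, e) =>
        {
            if (e.CurrentState != ConnectionState.Open)
            {
                return;
            }

            object tenantId = null;
            _httpContextAccessor.HttpContext?.Items.TryGetValue("TenantId", out tenantId);
            if (tenantId == null)
            {
                return; // e.g. the tenant lookup itself, before the tenant is known
            }

            using (var command = connection.CreateCommand())
            {
                command.CommandText = "EXEC sp_set_session_context @key = N'TenantId', @value = @TenantId";
                var parameter = command.CreateParameter();
                parameter.ParameterName = "@TenantId";
                parameter.Value = tenantId;
                command.Parameters.Add(parameter);
                command.ExecuteNonQuery();
            }
        };
    }

    // DbSets and model configuration as before ...
}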

Ok, we’re done with DataContext now. Let’s create a simple API to get some products:
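A minimal version of the controller might look like this – note that there is nothing tenant-specific in it:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[Route("api/[controller]")]
public class ProductsController : Controller
{
    private readonly DataContext _dataContext;

    public ProductsController(DataContext dataContext)
    {
        _dataContext = dataContext;
    }

    [HttpGet]
    public async Task<IActionResult> GetProducts()
    {
        // No tenant filtering here - RLS filters the rows based on SESSION_CONTEXT
        return Ok(await _dataContext.Product.ToListAsync());
    }
}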

Let’s check this is working in Postman with our 2 API keys. We should get 3 products for the 1st key and 2 products for the 2nd key:

Cool, we are good. Let’s quickly add an action method to get a product by its id:

Now let’s create the action method in our API to post new products:


… and let’s give it a try:

… it blows up!

We obviously have a problem because we are not submitting the tenant id in the product.

I lied when I said we were done with our database! Let’s add a default value for the tenant id that picks up SESSION_CONTEXT(‘TenantId’):

Now, when you post a product in Postman, we should be ok:

To finish off our controller we’ll quickly implement a PUT and DELETE:

Pretty straightforward and they both work. If you try to PUT or DELETE a product that belongs to a tenant that your API key isn’t for, you get a 404 – just as we want.

The great thing about this approach is that there is no reference to the tenant in our controller code – the multi-tenancy is taken care of for us lower down in the stack, which reduces our development costs and the risk of leaking data.


HttpContext in ASP.NET Core

HttpContext is largely the same in ASP.NET Core as it’s always been. However, one difference is that it is not as readily available everywhere in your code base as it used to be …

If you are in a controller then you are okay – HttpContext is a property of ControllerBase:

If you are in middleware then you are okay – HttpContext is passed in as a parameter:

However, if you are in the domain layer, then you need to do some work …

First you need to register IHttpContextAccessor for dependency injection in the app’s Startup class:
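In ASP.NET Core 2.0 that’s a one-liner in ConfigureServices (later versions add an AddHttpContextAccessor() convenience method):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // IHttpContextAccessor is not registered by default
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}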

In your domain layer, you then need to bring in Microsoft.AspNetCore.Http via NuGet and use dependency injection to get a reference to IHttpContextAccessor, which in turn gives you a reference to HttpContext.
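For example, in a hypothetical domain class:

using Microsoft.AspNetCore.Http;

public class ContactService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ContactService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetCurrentUserName()
    {
        // HttpContext is reached through the accessor rather than directly
        return _httpContextAccessor.HttpContext.User.Identity.Name;
    }
}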



Integrating Validation in Angular 2 and ASP.NET Core

I’m building an Angular 2 app with an ASP.NET core web API behind it and need to add some validation. Obviously, I need to validate on the server in the web API, but I also want to do some client validation to make sure the user experience is good …

Server Validation

Here’s my model, which you can see is using attributes to define some required field validation:

Here’s our controller below.

If the required fields aren’t filled in (detected by the attribute validation), an HTTP 400 status code is returned with the validation errors in the response body.

We also do some more complex validation, checking if the email address is already in use. Again we return a 400 and the validation error if the email address exists in our database.

So, let’s test our API in Postman.

Test that required fields are validated:

Test that a valid person is saved:

Test that a person with an email address already in the database is validated:

Wiring up our Angular 2 app

Now, that we have our web API in place, let’s build our page to submit a new person.

At this point we’re not going to implement any client-side validation – we’re just going to handle the validation coming from the server.

Here’s our component markup:

  • Lines 1-7 will show the validation errors from the server
  • Lines 8-10 will show confirmation that the person has been saved into the database ok
  • Line 12 defines our form, telling Angular that the form object will be called personForm and the submit handler will be a function called onSubmit
  • Lines 14-46 define our field inputs for the person
  • Lines 48-52 defines our submit button

Here’s our component code:

  • Line 14 defines a variable called errors which will be an array of validation errors
  • ngOnInit on line 19 creates our form object with no client-side validation defined at the moment
  • onSubmit posts the person object to our web API. You can see we catch validation errors on lines 47-55, pushing them to the errors array

If you run the app and hit the Submit button without filling in the inputs, you get something like:

Client Validation

Now, let’s implement the client validation in the Angular app.

First, we need to change ngOnInit to include the validation rules:

Then, we need to change our markup for the field inputs to output the validation errors. Here’s the markup for the title input:

  • Line 1 adds the has-error CSS class if we have touched and not filled in the input
  • Lines 13-16 displays the error message beneath the input if we have touched and not filled in the input

So, cool, if the user doesn’t fill in any of the inputs, validation errors will be shown before the submit button is pressed:

However, what if we enter a duplicate email address:

As you can see, the validation error from the server is shown which is good, but ideally we want to highlight the email input and display the error beneath it. So, we need to integrate our validation errors from the server into Angular’s validation.

The solution is in the highlighted lines below. We know the field name from the server’s validation error dictionary. So, if the field is on our form, we flag to Angular that there is an error using the control’s setErrors function.

In our example, we don’t have any validation that is not bound to a single field (cross field validation), but if we did, the validation errors would be shown at the top of the page.