
Scalable and Performant ASP.NET Core Web APIs: SQL Server Isolation Level

This is another post in a series of posts on creating performant and scalable web APIs using ASP.NET Core 2.0. In this post we’ll have a little look at isolation levels in SQL Server, which is something that is often not thought about … until we hit performance issues …

By default, SQL Server uses the “Read Committed” isolation level and by default this blocks reads whilst updates take place. This can lead to scalability issues if we have lots of users reading data in addition to users writing data – particularly when the writes are heavy.

Let’s look at a simple example of simulating some users updating a contact record whilst other users get contact records.

We’ll start off by getting a benchmark for users just getting contact records. Here’s our Dapper data access code to get a contact record:
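
The original snippet isn’t reproduced here, so below is a minimal sketch of the kind of Dapper query involved (the Contact class, _connectionString field and column names are assumptions):

public async Task<Contact> GetContact(Guid contactId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // Dapper maps the single row straight onto our Contact class
        return await connection.QueryFirstOrDefaultAsync<Contact>(
            @"SELECT ContactId, Title, FirstName, Surname
              FROM dbo.Contact
              WHERE ContactId = @ContactId",
            new { ContactId = contactId });
    }
}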

Here’s a snippet of results from our load test on the API endpoint:

GET benchmark

On to our data access code for updating a contact record … We’ve simulated a long database write by using a Thread.Sleep, resulting in the update taking > 1 sec.
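
A sketch of the sort of update used in the test – the transaction keeps the locks held while Thread.Sleep simulates the slow write (table and column names are assumptions):

public async Task UpdateContact(Contact contact)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        await connection.OpenAsync();
        using (var transaction = connection.BeginTransaction())
        {
            await connection.ExecuteAsync(
                @"UPDATE dbo.Contact
                  SET Title = @Title, FirstName = @FirstName, Surname = @Surname
                  WHERE ContactId = @ContactId",
                contact, transaction);

            Thread.Sleep(1500); // simulate a long write so the locks are held for > 1 sec

            transaction.Commit();
        }
    }
}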

OK, let’s load test some users updating a contact record whilst other users are getting the contact record:

GET requests blocked

We see that the GET requests take far longer than they were previously because they are being blocked by the PUT requests.

So, what can we do? How can we resolve this? It is tempting to use the “Read Uncommitted” isolation level. This appears to resolve the problem ….
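
For reference, the dirty-read approach would mean running the query under the “Read Uncommitted” isolation level, something along these lines:

using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync();
    using (var transaction = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        // Reads rows that other transactions have modified but not yet committed
        var contact = await connection.QueryFirstOrDefaultAsync<Contact>(
            "SELECT ContactId, Title, FirstName, Surname FROM dbo.Contact WHERE ContactId = @ContactId",
            new { ContactId = contactId }, transaction);
        transaction.Commit();
    }
}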

… but what if the SQL did more database writes after the UPDATE on the Contact table and in the process errored and rolled back? We’d be potentially reading and returning incorrect data!

There is a much simpler solution … We can change the “Read Committed” isolation level behaviour to not block and instead return the most recently committed version of the data. We change this on the database using the following SQL command:
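
The command itself wasn’t captured above; assuming the database is called ContactsDb, it is along these lines:

-- Switching this on needs exclusive access to the database, hence the termination clause
ALTER DATABASE ContactsDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;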

So, no code changes – sweet!

If we now do the same load test again, the GET requests are much faster because they are not being blocked by the PUT requests:
GET requests faster

In summary, having READ_COMMITTED_SNAPSHOT ON can generally lead to better performance without losing any accuracy in the data.

Don’t forget to subscribe to the rest of this series in the footer of this page!

Scalable and Performant ASP.NET Core Web APIs: ORM Choice

This is another post in a series of posts on creating performant and scalable web APIs using ASP.NET Core. In this post we’ll start to focus on data access within our API, which is often the cause of performance bottlenecks. We’ll start off by choosing our object-relational mapping (ORM) framework / library.

Be careful with EF Core

Entity Framework Core nicely encapsulates the database details from developers. However, this approach does carry some performance overheads and gotchas.

Let’s look at an example. Here’s our database model:

Database Model

We have a controller action method /contacts/{contactId} that returns a contact record.
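
The action method isn’t shown above; a sketch of the kind of code that produces the trace below, assuming a DataContext with a Contacts DbSet injected as _dataContext, is:

[HttpGet("contacts/{contactId}")]
public IActionResult GetContact(Guid contactId)
{
    // The .ToList() materialises the whole Contact table; the filter then runs in memory
    var contact = _dataContext.Contacts
        .ToList()
        .FirstOrDefault(c => c.ContactId == contactId);

    return contact == null ? (IActionResult)NotFound() : Ok(contact);
}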

Here’s the SQL Profiler trace:

EF Core SQL Profile - Example 1

Where’s the WHERE clause!? EF Core has fetched all the data in the Contact table too early because of the .ToList(). If we remove the .ToList(), EF Core behaves a little more as we expect:

EF Core SQL Profile - Example 2

Let’s add some more requirements for our action method … We now want to include the primary address, primary email address and primary phone number in /contacts/{contactId}.

The implementation is below. We’ve been good EF Core developers and used .AsNoTracking() so that no unnecessary change tracking occurs. We’ve also been specific about what fields we want to retrieve so that EF Core doesn’t fetch more data than we need.
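
The implementation isn’t reproduced here, but it was along these lines – a single LINQ projection over the collection navigations (the entity and property names are assumptions; in EF Core 2.0 each of these sub-queries can be translated into its own round trip rather than one joined statement):

var contact = await _dataContext.Contacts
    .AsNoTracking()
    .Where(c => c.ContactId == contactId)
    .Select(c => new ContactDetail
    {
        ContactId = c.ContactId,
        FirstName = c.FirstName,
        Surname = c.Surname,
        // Each projection over a collection navigation can become a separate query
        PrimaryAddress = c.Addresses.Where(a => a.IsPrimary)
            .Select(a => a.Line1 + ", " + a.Town + ", " + a.Postcode).FirstOrDefault(),
        PrimaryEmailAddress = c.EmailAddresses.Where(e => e.IsPrimary)
            .Select(e => e.Email).FirstOrDefault(),
        PrimaryPhoneNumber = c.PhoneNumbers.Where(p => p.IsPrimary)
            .Select(p => p.Number).FirstOrDefault()
    })
    .SingleOrDefaultAsync();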

Here’s the SQL Profiler trace:

EF Core SQL Profile - Example 3

We have 4 calls to the database when a single SQL statement could have done the job.

So, with EF Core, we need to know what we are doing and be really careful if we are implementing an API on top of a decent-sized database. There are also cases when EF Core is going to generate inefficient SQL or round trip to the database multiple times – that’s just how it works!

Here’s a great post on other performance gotchas in Entity Framework. It’s not specifically about EF Core but many of the points do apply.

Dapper helps build performant data access code

Dapper is a micro ORM that doesn’t abstract the SQL from us. In Dapper, we use regular SQL to access the database. Dapper simply maps the data to our C# objects.

Here’s the data access code using Dapper for the example above.
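
The Dapper version wasn’t captured above; a sketch that gets everything in one round trip, using the same hypothetical column names as before:

public async Task<ContactDetail> GetContactDetail(Guid contactId)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        return await connection.QueryFirstOrDefaultAsync<ContactDetail>(
            @"SELECT c.ContactId, c.FirstName, c.Surname,
                     a.Line1 + ', ' + a.Town + ', ' + a.Postcode AS PrimaryAddress,
                     e.Email AS PrimaryEmailAddress,
                     p.Number AS PrimaryPhoneNumber
              FROM dbo.Contact c
              LEFT JOIN dbo.Address a ON a.ContactId = c.ContactId AND a.IsPrimary = 1
              LEFT JOIN dbo.EmailAddress e ON e.ContactId = c.ContactId AND e.IsPrimary = 1
              LEFT JOIN dbo.PhoneNumber p ON p.ContactId = c.ContactId AND p.IsPrimary = 1
              WHERE c.ContactId = @ContactId",
            new { ContactId = contactId });
    }
}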

Here’s the SQL Profiler trace confirming only 1 hit on the database:
Dapper SQL Trace

So, our ORM choice for the rest of this series on creating performant and scalable web APIs using ASP.NET Core 2.0 is Dapper. We’ll continue looking at data access in our next post – this time focusing on isolation levels and the impact on performance.

Don’t forget to subscribe to the rest of this series in the footer of this page!

Scalable and Performant ASP.NET Core Web APIs: Profiling and Monitoring

Monitoring

This is the 2nd post in a series of articles on creating performant and scalable web APIs using ASP.NET Core 2.0. In this post we’ll focus on tools that can help us profile and monitor our API so that we can spot any performance and scalability issues before our customers do.

Often, like most bugs, the earlier in the development cycle we find a performance or scalability problem, the quicker and easier it is to fix. So, it is important to make use of these tools from the start of the development cycle as well as when the API is in production.

During our dev cycles, as well as checking we get the right status code, response body, … we should check the duration of the call and the size of the response. We’ll then hopefully spot if the API is slowing down during the development phase.

Below is the popular Postman tool, giving us the duration and size of the response.

Postman

Development Profiling Tools

Apart from keeping an eye on the duration and size of the API calls, what else can we do? What other tools are there in the development phase to give us confidence that the API is going to perform and scale well?

Stackify Prefix

Stackify Prefix is a great free profiling tool that can be used during the development of an ASP.NET Core Web API. It tracks web requests and database queries and we can even wire it up to Serilog. It’s really useful to have this on a 2nd screen as we build out our API.

To get started with Prefix and profile our API, first download Prefix and then bring in the following nuget package:

We then need to add the Stackify middleware before the MVC middleware. It also needs to come before any exception logging middleware so that exceptions appear in Prefix:
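
The registration wasn’t captured above. At the time of writing, Stackify’s middleware lived in the StackifyMiddleware nuget package, so the wiring in Startup.Configure is roughly as follows (treat the package and type names as assumptions to check against Stackify’s current docs):

public void Configure(IApplicationBuilder app)
{
    // Stackify first, so requests and exceptions further down the pipeline are captured
    app.UseMiddleware<StackifyMiddleware.RequestTracerMiddleware>();

    // ... exception logging middleware here ...

    app.UseMvc();
}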

If we want to include the Serilog logs in Prefix, we need to bring in the following nuget package:

… and add the following bits in appSettings.json:

To run Prefix, we need to click on its icon in the task bar, enable it and then open it in a browser:

Run Prefix

If we then run our ASP.NET Core Web API in Visual Studio, Prefix will start to profile our API. We will see the requests, database queries and other information we’ve written to Serilog:

Prefix

Nice!

Application Insights in Visual Studio

Application Insights is Microsoft’s fully fledged Application Performance Monitoring (APM) tool (we’ll come on to APMs later in this post). We can run this locally in Visual Studio 2017 to profile the API during development as an alternative to Stackify Prefix. It’s convenient because we are already likely using Visual Studio to develop the API.

To add this to a Visual Studio project, we right-click on the project in Solution Explorer and click “Add > Application Insights Telemetry …”. We then click the “Start Free” button and then click “Or just add the SDK to try local only mode” at the bottom of the screen. The Application Insights SDK will then be added to the solution.

Application Insights will automatically track our web requests – we don’t need to add any middleware. Unlike Prefix, it doesn’t track database queries – we’d need to track these in our logger and wire Application Insights up to it (it is automatically wired up to the standard ILogger).

After we’ve added Application Insights to our solution in Visual Studio, we can view the trace information by going to “View > Other Windows > Application Insights Search”. If we enter a date and time in the “From” and “To” inputs and click the search icon, we will hopefully see trace information for the API calls in the bottom half of the screen. The actual individual trace items are in the middle column. If we click on a trace item, we get additional details of the item to the right of it. The element that I find really useful is the “Track Operation” section on the bottom right of the screen. This gives us an overview of the API call and we can quickly see what bits are slow and need further investigation.

Application Insights

“Application Insights Trends” is great for giving us an overview of the trace information. We can access this in Visual Studio via View > Other Windows > Application Insights Trends.

Personally, I prefer the UX of Prefix, but Application Insights is worth a look.

SQL Server Profiler

As the name suggests, SQL Server Profiler traces the SQL statements executed on a SQL Server database. This is a tool that comes with SQL Server and obviously is only useful if our API uses SQL Server for its storage! If we are using a different database then there is likely to be an equivalent profiler.

We can use this tool if our higher level profiling tool (like Prefix or an APM) has pointed to a problem in the data access code. This gives us a clear understanding of what SQL is being executed along with the associated costs.

This tool can also give us all the activity from the SQL Server – it may not be our API that is problematic – it may be some other process that is hogging the resources.

As well as the duration of each SQL statement, we can get other useful performance related information like the amount of CPU in milliseconds used by the statement, the number of page read I/Os caused by the statement and the number of page write I/Os caused by the statement.

SQL Profiler

Visual Studio Memory Profiler

Visual Studio has a set of low level profiling tools. The one that I find most useful is the memory profiler which lets us take snapshots of the managed and native memory heap and drill in to the differences between the snapshots. This can help us find memory leaks or just inefficient use of memory in our code.

We can switch the memory profiler on by clicking “Debug > Windows > Show Diagnostic Tools”.

Visual Studio Memory Profiler

We won’t be using this every day – just when our higher level profiling tool points to a problem in a specific area of code that we want to profile a little deeper.

Application Performance Monitoring (APM) Tools

APMs are primarily used to profile our code in production with regards to performance. However, these tools are also useful in the QA environment – particularly if we are running a load test and want to measure how different parts of the API perform.

Stackify Retrace

Stackify Retrace is a service that tracks the same information as Stackify Prefix, but in production. It also tracks other metrics on the server such as CPU usage. Retrace can also notify us when certain events happen – e.g. when CPU usage > 90%.

In addition to adding the same middleware to our API code as we did for Prefix, we need to register with Stackify and download and install an agent. The agent will send the profile information to Stackify for us to view in their portal.

As with most APMs, this is a paid service which starts from $10 per month at the moment.

Application Insights

As mentioned before, Application Insights is Microsoft’s fully fledged APM tool. As well as wiring up locally, we can wire this up to the Azure Application Insights service. This allows us to view the information in the Azure Portal.

Azure Insights

The nice thing about this APM is that there is a free usage tier for up to 1GB worth of data. Worth a look – particularly if our API is hosted in Azure.

So, there we have a range of high level and low level profiling tools that work well with ASP.NET Core 2.0. In the next post we’ll look at load testing tools …

Don’t forget to subscribe to the rest of this series in the footer of this page!

Scalable and Performant ASP.NET Core Web APIs: Introduction

Performant and scalable web APIs

Over the next month or so I’ll be posting a series of posts on creating performant and scalable web APIs using ASP.NET Core 2.0.

Performance is how fast our API can deal with a single request and make a response. Scalability is the number of concurrent requests our API can deal with before it slows down significantly.

ASP.NET Core itself is very focused on performance, which makes it a great choice for building APIs that perform well. v2.0 of .NET Core gave us general performance improvements in commonly used areas such as many of the collection classes, LINQ, compression and text processing. You can read more about the improvements here. In order for us to create APIs that scale well as more users use the API, we’ll need to architect our API well with usage growth in mind.

The series will start with how to profile and load test our API from a performance point of view. This is really important if we want to know we have a performance problem well before our customers tell us!

We’ll then move on to data access. ORMs are very popular these days but they can be the cause of performance issues …

We’ll look at paging, filtering and searching which improve the consumer’s experience of our API as well as hopefully having a positive performance impact.

We’ll have a good look at caching … allowing the client to cache responses as well as having a gateway cache on our server.

We’ll move on to allowing our API to work asynchronously to see how that affects performance and scalability.

We’ll look at a Microservices architecture and how that can help scalability.

We’ll end up on the topic of large volume imports – a common requirement in line of business applications that store lots of data.

Ensure you don’t miss any of these posts by subscribing to this blog in the footer of this page!

Controlling our ASP.NET Core Serilog Log

In a previous post we built up a Serilog log full of useful information. In this post we’re going to see if we can control exactly what is logged, without a software deployment. By default, we want our logs to be fairly light and have the ability to increase the amount of logging when we are diagnosing a problem.

So, we have an ASP.NET Core Web API in production, hosted in IIS. We also have the middleware running from this post that gives us additional properties in our log like “RequestPath” and “UserName”. Here’s our appSettings.json:
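
The file isn’t reproduced above; the relevant Serilog section, with the MinimumLevel we’ll be changing, looks something like this (the connection string and table name are placeholders):

{
  "Serilog": {
    "MinimumLevel": "Information",
    "WriteTo": [
      {
        "Name": "MSSqlServer",
        "Args": {
          "connectionString": "Server=.;Database=ContactsDb;Trusted_Connection=True;",
          "tableName": "Log"
        }
      }
    ]
  }
}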

If we GET /api/contacts/f7d10f53-4c11-44f4-8dce-d0e0e22cb6ab we get:

Default log

If we increase the level to “Error”, we no longer get the “Information” entries. We will only get exceptions logged in our log. We don’t need to restart the app pool in IIS – the change kicks in as soon as we save appSettings.json. Nice!

If we decrease the level to “Debug”, we get a lot more log entries.

Debug Log

So, switching to the “Debug” level is going to be valuable when we are diagnosing a problem. However, that’s a lot of log entries from all over our web API – can we be more specific on what we want to log? That’s where filter expressions come in …

In order to use filter expressions, we need to reference the following nuget:

Having changed the MinimumLevel to “Debug”, let’s try to add a filter so that we only get “Debug” entries on GET api/contacts/{contactId} whilst still getting errors logged. Let’s change appSettings.json to:
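
The exact file isn’t shown above, but with a filter-expressions package such as Serilog.Filters.Expressions referenced, the shape of the configuration is roughly:

{
  "Serilog": {
    "MinimumLevel": "Debug",
    "Filter": [
      {
        "Name": "ByIncludingOnly",
        "Args": {
          "expression": "RequestPath like '/api/contacts/%' or @Level = 'Error'"
        }
      }
    ],
    "WriteTo": [
      {
        "Name": "MSSqlServer",
        "Args": {
          "connectionString": "Server=.;Database=ContactsDb;Trusted_Connection=True;",
          "tableName": "Log"
        }
      }
    ]
  }
}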

Unfortunately, for the filter to kick in, we do need to restart the app pool in IIS. After doing this, we do get all the log entries for that particular path:

Filtered Log

The filter expression uses a powerful SQL-like syntax. The filter expression docs give the full details.

Another example of using a filter expression to control our logs is if a particular user is having a problem. The following appSettings.json will give all the log entries for “Carl” as well as any other errors.
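
Again roughly, assuming the middleware pushes a UserName property as in the earlier post:

"Filter": [
  {
    "Name": "ByIncludingOnly",
    "Args": { "expression": "UserName = 'Carl' or @Level = 'Error'" }
  }
]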

Cool stuff!

Instrumenting Dapper Queries in ASP.NET Core

In my last post we touched on what we get from EF Core in our Serilog log entries. But what if we are using Dapper? How can we trace the SQL and get query execution times?

We already have our Serilog outputting to SQL Server and our middleware from the last post.

Unfortunately, we don’t get any logging out of the box for Dapper like we do with EF Core – so, we need to do a bit of work.

We’ll create a class that wraps the Dapper methods we use and we’ll instrument those methods. We turn stats on for the connection and get them after the query has finished, passing them to a central method that does the logging.
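
The class itself wasn’t captured above; a cut-down sketch of the idea (the method names and the choice of stats to log are assumptions):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;
using Newtonsoft.Json;
using Serilog;

public class DapperWrap
{
    private readonly string _connectionString;

    public DapperWrap(string connectionString) => _connectionString = connectionString;

    public async Task<IEnumerable<T>> QueryAsync<T>(string sql, object param = null)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.StatisticsEnabled = true; // turn stats on for the connection
            await connection.OpenAsync();
            try
            {
                var result = await connection.QueryAsync<T>(sql, param);
                LogInfo(sql, param, connection.RetrieveStatistics());
                return result;
            }
            catch (Exception ex)
            {
                AddAdditionalInfoToException(ex, sql, param);
                throw;
            }
        }
    }

    private static void LogInfo(string sql, object param, IDictionary stats) =>
        Log.ForContext("Parameters", param, destructureObjects: true)
           .Information("Dapper query {Sql} took {ExecutionTime}ms and returned {SelectRows} rows",
               sql, stats["ExecutionTime"], stats["SelectRows"]);

    private static void AddAdditionalInfoToException(Exception ex, string sql, object param)
    {
        // Keep the SQL and parameters together with the exception in the log
        ex.Data.Add("SQL", sql);
        ex.Data.Add("Parameters", JsonConvert.SerializeObject(param));
    }
}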

The LogInfo() method logs the SQL, parameters and useful stats to Serilog. We could have just logged all the stats by doing .ForContext("Stats", stats, destructureObjects: true) but the log output wouldn’t have been as neat.

The AddAdditionalInfoToException() method makes sure we have the full SQL and parameters in the exception that has occurred, ensuring that they appear together in our log.

Now, to use our DapperWrap class, we need to register it in Startup.
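
For example, assuming the connection string lives in configuration under "DefaultConnection":

services.AddScoped(_ => new DapperWrap(Configuration.GetConnectionString("DefaultConnection")));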

We can then inject DapperWrap where we need it. I’ve used it in a simple controller below:
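
Something along these lines (the route and SQL are illustrative):

[Route("api/[controller]")]
public class ContactsController : Controller
{
    private readonly DapperWrap _dapper;

    public ContactsController(DapperWrap dapper) => _dapper = dapper;

    [HttpGet("{contactId}")]
    public async Task<IActionResult> Get(Guid contactId)
    {
        var contacts = await _dapper.QueryAsync<Contact>(
            "SELECT ContactId, Title, FirstName, Surname FROM dbo.Contact WHERE ContactId = @ContactId",
            new { ContactId = contactId });

        var contact = contacts.FirstOrDefault();
        return contact == null ? (IActionResult)NotFound() : Ok(contact);
    }
}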

We then get the following in our Serilog log:

Dapper log

… with the following properties:

For an exception we get:

Dapper exception

… which includes the full SQL and parameter in the properties:

Nice!

Adding Useful Information to ASP.NET Core Web API Serilog Logs

In my last post, we setup structured logging leveraging Serilog and SQL Server. In this post, we’ll look closely at the log entries themselves for a web API – what comes out of the box, what’s missing and how we can fill the gaps …

A good log saves debugging time – particularly in production, by quickly helping us to pinpoint the root of a problem. A log containing a wealth of relevant information reduces the amount of “I can’t reproduce” issues.

What we get out of the box

Let’s look at what we get out of the box then. We’ve already enabled Serilog with SQL Server as the destination. Let’s now tell Serilog to log as much as it can in appSettings.json:

We have a super simple controller for a contact resource in our web API that sits on top of an EF Core data context. We only have methods for getting a record and inserting a record. This will be fine for allowing us to have a close look at the log entries that are generated.

We also have an even more simplistic authentication controller that signs us in as “Carl” if we POST to api/auth.

After signing in and calling GET api/contacts/{contactId} we get:
Out of box log

That’s quite a bit of information, but let’s drill in more …

These are the properties we get from the request:

Notice that the request headers and body are missing. There is no information about the signed in user.

These are the properties we get from the response:

Notice that the response headers and body are missing. Again there is no information about the signed in user.

These are the properties we get from an EF core query execution:

Notice that the parameter value is missing – we just get “?”.

Let’s create an exception by adding a field in our data model class that doesn’t exist in the database:

Here’s what we get when we GET api/contacts/{contactId}:

EF core exception

If we drill into the 2 highlighted entries, we have the following properties:

… and …

So, it’s great that we have the call stack and SQL but the parameter value for the SQL would have been nice.

What’s missing and why?

So, the key information that is missing is:

  • The request headers
  • The request body
  • The response headers
  • The response body
  • The EF core query parameter values
  • The user name

I don’t know for sure why the above are missing but I’m guessing it’s a combination of security and performance.

In terms of security, we need to secure the logs. Regardless of whether we add more sensitive data, they already contain sensitive information about the internals of our app. We’ve taken a great first step by putting them in SQL Server because a person will need a valid SQL logon in order to be able to access the log directly. If we expose the log via a web page, we need to make sure that page is authenticated and also protected by an appropriate authorisation rule. So, if we secure the log properly I think we are good to add the missing information.

In terms of performance, let’s do a benchmark performance test now and do the same again after we’ve added the missing information to determine the impact. We’re going to use Rick Strahl’s excellent WebSurge tool for the test.

Here are the results of the benchmark test where we repeatedly GET api/contacts/{contactId} for 10 seconds.

So, the average request time is 3.2ms when the app is running in IIS.

Before we go on and add this stuff, why do we need these bits of information? The information will help us pin problems down. The request headers and body are particularly important for web APIs – it’s much easier to reproduce a problem with a request if we have all the information to replay it. If the problem is receiving unexpected data rather than an unexpected exception, then the response headers and body give us all the information we need to work back through the request to determine what happened. In terms of the EF Core queries, we can’t replay an exact query without the parameter values – e.g. a performance issue may be dependent on specific parameter values. Finally, some problems are user specific, so capturing the user allows us to quickly get all the log entries for the problematic user.

Adding EF Core parameter values

This is a quick one. All we need to do is switch on “EnableSensitiveDataLogging” when the data context is configured.
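
That switch lives on the DbContextOptionsBuilder, so in Startup.ConfigureServices it looks something like this (the connection string name is a placeholder):

services.AddDbContext<DataContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))
           .EnableSensitiveDataLogging()); // include parameter values in EF Core's log entries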

This now includes the parameter value in the log entry:

Adding missing request and response information

This is a little more work. We’re going to use middleware so that we only have to write the code once (rather than add code to each controller and action method).

We’re going to call our middleware, SerilogRequestLogger. So, let’s register this when the app starts up. It is important that this is just after we add authentication in the middleware pipeline or we won’t have access to the user information.
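
The registration in Startup.Configure then looks roughly like this:

app.UseAuthentication();
app.UseMiddleware<SerilogRequestLogger>(); // after authentication so HttpContext.User is populated
app.UseMvc();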

Now on to the implementation of the middleware. We add the user name to all the log entries by using LogContext.PushProperty. You can see capturing the request and response body is a little tricky because they are streams.
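
The full class isn’t reproduced here; a cut-down sketch of the approach on ASP.NET Core 2.x (EnableRewind was replaced by EnableBuffering in later versions, and Enrich.FromLogContext needs to be enabled in the Serilog configuration for PushProperty to show up):

using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Internal; // EnableRewind()
using Serilog;
using Serilog.Context;

public class SerilogRequestLogger
{
    private readonly RequestDelegate _next;

    public SerilogRequestLogger(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext httpContext)
    {
        // Push the user name so it appears on every log entry written during this request
        using (LogContext.PushProperty("UserName", httpContext.User.Identity.Name ?? "(anonymous)"))
        {
            // The request body is a forward-only stream, so allow it to be rewound and read it up front
            httpContext.Request.EnableRewind();
            string requestBody;
            using (var reader = new StreamReader(httpContext.Request.Body, Encoding.UTF8, false, 1024, leaveOpen: true))
            {
                requestBody = await reader.ReadToEndAsync();
            }
            httpContext.Request.Body.Position = 0;

            Log.Information("Request {Method} {Path} {Headers} {Body}",
                httpContext.Request.Method, httpContext.Request.Path,
                httpContext.Request.Headers, requestBody);

            // Swap the response body for a stream we can read, then copy it back afterwards
            var originalResponseBody = httpContext.Response.Body;
            using (var responseBodyStream = new MemoryStream())
            {
                httpContext.Response.Body = responseBodyStream;

                await _next(httpContext);

                responseBodyStream.Position = 0;
                var responseBody = await new StreamReader(responseBodyStream).ReadToEndAsync();

                Log.Information("Response {StatusCode} {Headers} {Body}",
                    httpContext.Response.StatusCode, httpContext.Response.Headers, responseBody);

                responseBodyStream.Position = 0;
                await responseBodyStream.CopyToAsync(originalResponseBody);
                httpContext.Response.Body = originalResponseBody;
            }
        }
    }
}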

Here’s an example of what we get in our request information log entry for a POST to api/contacts:

… and our EF call:

… and our response information:

Dealing with exceptions

So far, so good – we have added some extra useful information and this is included in exceptions as well (even though I’ve not shown it above). We can go a bit further though …

We can improve the experience of the consumer of the API by giving them an error reference that they can quote when raising a support ticket. We need to wrap an exception handler around the await _next(httpContext) line in our middleware:
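
A sketch of that wrapper – the shape of the response returned to the consumer is an assumption:

try
{
    await _next(httpContext);
}
catch (Exception ex)
{
    var errorId = Guid.NewGuid();
    Log.ForContext("ErrorId", errorId)
       .Error(ex, "Unhandled exception processing {Path}", httpContext.Request.Path);

    httpContext.Response.StatusCode = 500;
    httpContext.Response.ContentType = "application/json";
    await httpContext.Response.WriteAsync(JsonConvert.SerializeObject(new
    {
        ErrorId = errorId,
        Message = "Sorry, something went wrong. Please quote this error id when raising a support ticket."
    }));
}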

Here’s what the consumer gets when they hit a problem in our code:

Consumer exception

We can quickly get to that entry in our log by writing a SQL query to filter on the errorId property:
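
With the MSSqlServer sink writing Properties as xml, the query is along these lines (the exact XML shape is worth checking against your own Log table):

SELECT TimeStamp, Level, Message, Exception
FROM dbo.Log
WHERE Properties.value('(//property[@key="ErrorId"])[1]', 'nvarchar(100)')
      = '<error id quoted by the consumer>'
ORDER BY TimeStamp DESC;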

How much slower is our middleware?

To finish this post off, let’s see the performance impact of logging all this extra stuff. Let’s again repeatedly call GET api/contacts/{contactId} for 10 seconds.

Perf with middleware

The average request time was 3.6ms. So, roughly an addition of 0.4ms for this test.

Not that much slower!


ASP.NET Core Logging with Serilog and SQL Server

Serilog is a great 3rd party library for structured logging in our ASP.NET Core apps. Structured logging is key to producing easily readable and filterable logs.

Using SQL Server as a log destination allows us to leverage the power of SQL querying for log filtering. It can be a good choice if our app is already using SQL Server.

So, how do we implement a Serilog SQL log in ASP.NET Core 2.0 …

First we need to bring in the following nuget packages:
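
The package list wasn’t captured above; the likely set is:

  • Serilog.AspNetCore
  • Serilog.Settings.Configuration
  • Serilog.Sinks.MSSqlServer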

Next up, we need to change how logging is setup in Program.cs. The 3 lines at the start of Main() tell the program to use Serilog as the logger and to read the configuration from appsettings.json.
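
The code isn’t reproduced above; the overall wiring looks something like this (a sketch – the exact shape depends on the Serilog.AspNetCore version):

public static void Main(string[] args)
{
    var configuration = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

    Log.Logger = new LoggerConfiguration()
        .ReadFrom.Configuration(configuration) // pick up the sink and minimum level from appsettings.json
        .CreateLogger();

    BuildWebHost(args).Run();
}

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseSerilog() // plug Serilog into ASP.NET Core's logging
        .Build();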

Serilog can then be configured to use SQL Server as a destination in appSettings.json, along with the minimum logging level. We need to specify the connection string for the logging database and the table name to log the data into.

We can get Serilog to create our Log table automatically but let’s do this ourselves so that we have control of our schema. For example, we want the Properties column to be based on the xml data type so that we can query it (Serilog creates this as nvarchar).

Below is the T-SQL script to create the Log table:
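
The script wasn’t captured above; the standard sink columns, with Properties switched to xml, look roughly like this:

CREATE TABLE [dbo].[Log] (
    [Id] INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Message] NVARCHAR(MAX) NULL,
    [MessageTemplate] NVARCHAR(MAX) NULL,
    [Level] NVARCHAR(128) NULL,
    [TimeStamp] DATETIMEOFFSET(7) NOT NULL,
    [Exception] NVARCHAR(MAX) NULL,
    [Properties] XML NULL -- xml rather than nvarchar so that we can query it
);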

We can then write to the log in our code, logging structured objects using {@object} in the message template. Below is a web API action method for getting and returning a record (a contact in this case). We log when the contact is fetched from the cache / database and also when the contact is set in the cache.
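
A sketch of that action method – the IMemoryCache usage and the GetContactFromDatabase helper are assumptions; the interesting part is the {@Contact} destructuring in the log messages:

[HttpGet("{contactId}")]
public async Task<IActionResult> Get(Guid contactId)
{
    if (_cache.TryGetValue(contactId, out Contact contact))
    {
        Log.Information("Contact {@Contact} retrieved from the cache", contact);
        return Ok(contact);
    }

    contact = await GetContactFromDatabase(contactId);
    if (contact == null) return NotFound();
    Log.Information("Contact {@Contact} retrieved from the database", contact);

    _cache.Set(contactId, contact, TimeSpan.FromMinutes(5));
    Log.Information("Contact {@Contact} set in the cache", contact);

    return Ok(contact);
}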

Now that Serilog and SQL Server are setup, wired together and we have some logging code, we should get our logs output to our SQL Server table.

Get contact log

Note that if we have problems getting the log to write to SQL Server, we can use Serilog.Debugging.SelfLog.Enable() in Program.Main() to surface errors from Serilog.

The Properties column contains useful additional information, including what we specified in our structured log after the contact is added to the cache:
Get contact log properties

We can extract specific data in the Properties column using XQuery syntax. We can even filter on values in the Properties column.

For example, if we wanted to find when a particular contact was added to the cache, we could use something like:

XQuery
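
Something along those lines – again, worth checking the XML shape in your own Properties column:

SELECT TimeStamp, Message
FROM dbo.Log
WHERE Message LIKE '%set in the cache%'
  AND Properties.exist('//text()[contains(., "f7d10f53-4c11-44f4-8dce-d0e0e22cb6ab")]') = 1;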

Pretty useful!


Creating a Multi-Tenant ASP.NET Core Web API with SQL Server RLS

In this post we’re going to leverage SQL Server Row Level Security (RLS), Entity Framework Core and ASP.NET Core to create a multi-tenant database with a multi-tenant web API …

First, let’s create a database containing a couple of tables – one to hold our tenants and one to hold some real data (products in this example):
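
Something like this – the column names are assumptions (the ApiKey column is what the web API middleware will look tenants up by later):

CREATE TABLE dbo.Tenant (
    TenantId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL,
    ApiKey UNIQUEIDENTIFIER NOT NULL
);

CREATE TABLE dbo.Product (
    ProductId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    TenantId INT NOT NULL REFERENCES dbo.Tenant (TenantId),
    Name NVARCHAR(100) NOT NULL
);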

We’ll add some test data in there as well. We’ll have 2 tenants – one with 3 products and the other one with 2 products:
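
Roughly:

INSERT INTO dbo.Tenant (TenantId, Name, ApiKey)
VALUES (1, 'Tenant One', NEWID()),
       (2, 'Tenant Two', NEWID());

INSERT INTO dbo.Product (TenantId, Name)
VALUES (1, 'Product A'), (1, 'Product B'), (1, 'Product C'),
       (2, 'Product D'), (2, 'Product E');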

Before we move on to RLS, let’s create a “super user” that will have access to data in all the tenants (this is useful for debugging). While we are at it, we’ll create a “normal user” that our web API will use to access data:
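
Roughly – the names and passwords are placeholders:

CREATE LOGIN superuser WITH PASSWORD = '<strong password>';
CREATE USER superuser FOR LOGIN superuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Product TO superuser;

CREATE LOGIN normaluser WITH PASSWORD = '<strong password>';
CREATE USER normaluser FOR LOGIN normaluser;
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Product TO normaluser;
GRANT SELECT ON dbo.Tenant TO normaluser; -- the web API looks tenants up by API key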

Now on to the RLS. First we need to create a predicate function that is going to apply a filter when the product table is accessed for the given tenant. The tenant for a given row is passed into the function. The tenant that the API is operating for is stored in SESSION_CONTEXT('TenantId') – we’ll come on to how this is set in the web API soon. We also need to make sure our super user has access to all the rows in the table:
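
A sketch of such a function (the schema and function names are assumptions):

CREATE SCHEMA Security;
GO

CREATE FUNCTION Security.fn_TenantAccessPredicate(@TenantId INT)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS AccessResult
    WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS INT) -- row belongs to the current tenant
       OR USER_NAME() = 'superuser';                            -- the super user sees everything
GO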

Now we can bind our predicate function to our product table. The FILTER predicate ensures we only read data for a given tenant and the BLOCK predicate ensures we only write data for a given tenant:
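
Along these lines:

CREATE SECURITY POLICY Security.TenantSecurityPolicy
    ADD FILTER PREDICATE Security.fn_TenantAccessPredicate(TenantId) ON dbo.Product,
    ADD BLOCK PREDICATE Security.fn_TenantAccessPredicate(TenantId) ON dbo.Product
    WITH (STATE = ON);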

So, let’s give this a quick test, connecting as normaluser:
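
For example, in a session connected as normaluser:

EXEC sp_set_session_context @key = N'TenantId', @value = 1;
SELECT * FROM dbo.Product;

EXEC sp_set_session_context @key = N'TenantId', @value = 2;
SELECT * FROM dbo.Product;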


We should get the following results back (3 rows for the first query and 2 for the 2nd query):

If you connect as superuser and run SELECT * FROM dbo.Product, you should get all 5 rows.

Cool, we’re done with our database. Let’s move on to our web API …

Let’s create a new ASP.NET Core Web Application in Visual Studio, choosing the Web API project template.

First let’s create our model classes to hold our tenants and products …
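
Something like this, matching the assumed schema above:

public class Tenant
{
    public int TenantId { get; set; }
    public string Name { get; set; }
    public Guid ApiKey { get; set; }
}

public class Product
{
    public int ProductId { get; set; }
    public int TenantId { get; set; }
    public string Name { get; set; }
}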

Now, we’ll bring in EF core using nuget (I’m using the stable 1.1.2 packages). We’ll need Microsoft.EntityFrameworkCore and Microsoft.EntityFrameworkCore.SqlServer.

Time to create our EF core data context. Here’s a basic class that maps our tenants and products:
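
Roughly:

public class DataContext : DbContext
{
    public DataContext(DbContextOptions<DataContext> options) : base(options) { }

    public DbSet<Tenant> Tenants { get; set; }
    public DbSet<Product> Products { get; set; }
}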

We’re not done with DataContext yet – we need to set SESSION_CONTEXT('TenantId') for every query that EF Core does … we’ll come back to this after we’ve got the tenant from the API.

So, let’s create some middleware that will inspect the API key from the request to determine the tenant:
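
A sketch of that middleware – the x-api-key header name is an assumption, and the tenant lookup here uses plain ADO.NET so that we don’t resolve the tenant-scoped DataContext before the tenant is known. The line that matters is the one that stores the tenant id in HttpContext.Items:

public class TenantMiddleware
{
    private readonly RequestDelegate _next;
    private readonly string _connectionString;

    public TenantMiddleware(RequestDelegate next, IConfiguration configuration)
    {
        _next = next;
        _connectionString = configuration.GetConnectionString("DefaultConnection");
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var apiKey = httpContext.Request.Headers["x-api-key"].FirstOrDefault();
        if (!Guid.TryParse(apiKey, out var apiKeyGuid))
        {
            httpContext.Response.StatusCode = 401; // missing or malformed API key
            return;
        }

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT TenantId FROM dbo.Tenant WHERE ApiKey = @ApiKey", connection))
        {
            command.Parameters.AddWithValue("@ApiKey", apiKeyGuid);
            await connection.OpenAsync();
            var tenantId = await command.ExecuteScalarAsync();

            if (tenantId == null)
            {
                httpContext.Response.StatusCode = 401; // unknown API key
                return;
            }

            // This is the important line: stash the tenant where DataContext can find it
            httpContext.Items["TenantId"] = (int)tenantId;
        }

        await _next(httpContext);
    }
}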


Please note the line in the above code where we place the tenant in a dictionary in the HttpContext. This is how we are going to get the tenant in DataContext.

Before we go back to DataContext, let’s go and do all our registrations in Startup:
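
Roughly, in ConfigureServices and Configure:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    services.AddDbContext<DataContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    app.UseMiddleware<TenantMiddleware>(); // resolve the tenant before MVC runs
    app.UseMvc();
}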

Time now to go back to DataContext.
We can set the tenant in our constructor:

We can then set SESSION_CONTEXT(‘TenantId’) every time a connection is opened, before any queries are run by EF Core.
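
Putting those two pieces together, the DataContext ends up along these lines – hooking the connection’s StateChange event is one way (an assumption, not necessarily the original approach) to run sp_set_session_context whenever EF Core opens the connection:

public class DataContext : DbContext
{
    private readonly int? _tenantId;

    public DataContext(DbContextOptions<DataContext> options, IHttpContextAccessor httpContextAccessor)
        : base(options)
    {
        // Set by TenantMiddleware earlier in the request
        _tenantId = httpContextAccessor.HttpContext?.Items["TenantId"] as int?;

        var connection = (SqlConnection)Database.GetDbConnection();
        connection.StateChange += (sender, e) =>
        {
            if (e.CurrentState != ConnectionState.Open) return;

            // Tell SQL Server which tenant this connection is acting for
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "EXEC sp_set_session_context @key = N'TenantId', @value = @TenantId";
                command.Parameters.AddWithValue("@TenantId", (object)_tenantId ?? DBNull.Value);
                command.ExecuteNonQuery();
            }
        };
    }

    public DbSet<Tenant> Tenants { get; set; }
    public DbSet<Product> Products { get; set; }
}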

OK, we’re done with DataContext now. Let’s create a simple API to get some products:
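
Something like this:

[Route("api/[controller]")]
public class ProductsController : Controller
{
    private readonly DataContext _dataContext;

    public ProductsController(DataContext dataContext) => _dataContext = dataContext;

    [HttpGet]
    public async Task<IActionResult> GetProducts() =>
        Ok(await _dataContext.Products.ToListAsync()); // RLS filters this down to the caller's tenant
}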

Let’s check this is working in Postman with our 2 API keys. We should get 3 products for the 1st key and 2 products for the 2nd key:

Cool, we are good. Let’s quickly add an action method to get a product by its id:

Now let’s create the action method in our API to post new products:


… and let’s give it a try:

… it blows up!

We obviously have a problem because we are not submitting the tenant id in the product.

I lied when I said we were done with our database! Let’s add a default value for the tenant id that picks up SESSION_CONTEXT(‘TenantId’):
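
Along these lines:

ALTER TABLE dbo.Product
    ADD CONSTRAINT DF_Product_TenantId
    DEFAULT CAST(SESSION_CONTEXT(N'TenantId') AS INT) FOR TenantId;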

Now, when we post a product in Postman, we should be OK:

To finish off our controller we’ll quickly implement a PUT and DELETE:

Pretty straightforward and they both work. If we try to PUT or DELETE products that belong to a tenant that our API key is not for, we get a 404 – just as we want.

The great thing about this approach is that there is no reference to the tenant in our controller code – the multi-tenancy is taken care of for us lower down in the stack, which reduces our development costs and the risk of leaking data.


Upgrading a Product to SQL Server 2016: Things to Check

SQL Server 2016 has been released recently. How do you check if your product is compatible with SQL Server 2016? Here’s the process I usually go through …

Research

I start by doing some research – looking at the discontinued, deprecated, breaking changes and behavioural changes listed by Microsoft and searching the application and database for occurrences. If any occurrences are found, I’ll note them down and include them in the next stage of the process which is testing.

Here are the lists published by Microsoft …

Discontinued Features

Deprecated Features

Breaking Changes

Behavior Changes

Upgrade Advisor

I will also run the SQL 2016 Upgrade Advisor on the database. This should identify stuff from the lists above, as well as potential improvements. Again, I’ll note any recommendations down for the testing stage. Here’s the link to the Upgrade Advisor

Hardware and Software Requirements

The last bit of my research is to check the new hardware and software requirements for SQL 2016. I’ll check these against the 2014 requirements to determine what’s changed. The main changes I’ve found are:

  • Required .NET version is now 4.6
  • Minimum processor is now an x64 processor at 1.4 GHz. So, no more x86

I will then update my product’s hardware and software requirements accordingly.

Tests

I’ll then create a test environment with my application running on top of a SQL Server 2016 database. Before I start my testing, I’ll double check the following:

  • SQL’s stats are up-to-date: EXEC sp_updatestats
  • SQL’s compatibility level is correct (should be 130): SELECT compatibility_level FROM sys.databases WHERE name = @dbname

Problem areas of the app will be noted down and taken to the next stage, which is forming a plan for the changes.

Testing Discontinued Features, Deprecated Features, Breaking Changes and Behavioural Changes

I’ll start by testing the areas of the app that I noted down from the research – to double check that the app is impacted and to determine how widespread and severe the impact is.

Automated Testing

I’ll then run the product’s automated UI tests on the SQL Server 2016 database.

Performance Testing

I’ll then carry out performance tests in key areas to compare against our 2014 benchmarks. Usually, performance is better, but if an area is significantly slower, I’ll treat that as a problem that needs resolving.

Testing Potential Improvements

I’ll then go on to testing suggested improvements by the upgrade advisor. Any significant improvements will be noted down and included in the action plan in the next stage.

Action Plan

I’ll then form a plan from the test results. Changes will be logged in our issue management system, prioritised, scheduled and eventually implemented.
Compatibility is declared when all the changes have been completed in a release.

What process do you go through when testing your products against new versions of SQL Server?
