Building high-performance ASP.NET applications


If you are building a public-facing web site, one of the things you want to achieve by the end of the project is good performance under load. That means you have to make sure the product works under heavy load (e.g. 50 concurrent users, or 200 users per second), even if you don't think you will face that much load right now. Chances are the site will attract more and more users over time, and if it cannot tolerate the load it will start flaking, leaving you with an unhappy customer and a ruined reputation.

There are many articles on the Internet about improving the performance of ASP.NET web sites, and they all make sense; however, I think there are some more things you can do to save yourself from massive dramas. So what steps can you take to produce a high-performance ASP.NET or ASP.NET MVC application?

  • Load test your application from the early stages

The majority of developers tend to leave load testing (if they do it at all) until the application is developed and has passed the integration and regression tests. Even though performing a load test at the end of the development process is better than not doing it at all, it may be far too late to fix performance issues once the code has already been written. A very common example: when the application does not respond properly under load, scaling out (adding more servers) is considered, and sometimes this is simply not possible because the code is not written for it. For instance, when the objects stored in Session are not serializable, adding more web nodes or worker processes is impossible (see the sketch below). If you find out at an early stage of development that your application may need to be deployed on more than one server, you will run your tests in an environment that is close to the final one in terms of configuration, number of servers and so on, and your code will be much easier to adapt.
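For example (a minimal sketch; the Basket type is hypothetical), anything destined for Session must be marked serializable before out-of-proc session state can work:

using System;
using System.Collections.Generic;

// Without [Serializable], storing a Basket in Session works with in-proc
// state but throws as soon as state moves to a state server or SQL Server.
[Serializable]
public class Basket
{
    public List<int> ProductIds { get; set; }
    public decimal Total { get; set; }
}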

  • Use the high-performance libraries

Recently I was diagnosing the performance issues of a web site and came across a hot spot in the code where JSON messages coming from a third-party web service had to be de-serialized several times. Those JSON messages were de-serialized with Newtonsoft.Json, and it turned out that Newtonsoft.Json was not the fastest library when it came to de-serialization. We replaced Json.NET with a faster library (ServiceStack.Text) and got a much better result.

Again, if the load test had been done at an early stage, when we first picked Json.NET as our serialization library, we would have found that performance issue much sooner and would not have had to make so many changes in the code and re-test everything again. See the sketch below for a rough comparison.
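This is roughly how such a comparison can be done (a micro-benchmark sketch; the Product type and the iteration count are illustrative, and results vary by payload and library version, so measure with your own messages):

using System;
using System.Diagnostics;
using Newtonsoft.Json;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class SerializerComparison
{
    public static void Run(string json, int iterations)
    {
        // De-serialize the same message many times with each library
        // and compare the elapsed times.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            JsonConvert.DeserializeObject<Product>(json);
        Console.WriteLine("Json.NET: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        for (int i = 0; i < iterations; i++)
            ServiceStack.Text.JsonSerializer.DeserializeFromString<Product>(json);
        Console.WriteLine("ServiceStack.Text: {0} ms", sw.ElapsedMilliseconds);
    }
}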

  • Is your application CPU-intensive or IO-intensive?

Before you start implementing your web site, while the project is being designed, one thing to think about is whether your site is CPU-intensive or IO-intensive. This matters because it determines your strategy for scaling the product.

For example, if your application is CPU-intensive you may want to use a synchronous pattern with parallel processing and so forth, whereas for a product with many IO-bound operations, such as communicating with external web services or network resources (e.g. a database), the Task-based Asynchronous Pattern might be more helpful for scaling out your product. Plus, you may want to have a centralized caching system in place, which will let you create Web Gardens and Web Farms in the future, spreading the load across multiple worker processes or servers. The sketch below contrasts the two kinds of work.
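A minimal sketch of the two styles (the method names and inputs are illustrative): CPU-bound work benefits from spreading computation across cores, while IO-bound work benefits from releasing the thread while waiting.

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class WorkloadExamples
{
    // CPU-bound: spread the computation across all available cores.
    public static long SumOfSquares(IEnumerable<int> numbers)
    {
        return numbers.AsParallel().Sum(n => (long)n * n);
    }

    // IO-bound: await the call so the thread is free while waiting.
    public static async Task<string> FetchAsync(string url)
    {
        using (var client = new HttpClient())
            return await client.GetStringAsync(url);
    }
}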

  • Use the Task-based Asynchronous Pattern, but with care!

If your product relies on many IO-bound operations, or includes long-running operations that make the expensive IIS threads wait for an operation to complete, you had better consider the Task-based Asynchronous Pattern for your ASP.NET MVC project.

There are many tutorials on the Internet about asynchronous ASP.NET MVC actions (like this one), so in this blog post I will refrain from explaining them. However, I have to point out that traditional synchronous actions in an ASP.NET (MVC) site keep the IIS threads busy until the operation is done and the request is processed. This means that if the site is waiting for an external resource (e.g. a web service) to respond, the thread stays busy. The number of threads in .NET's thread pool that can be used to process requests is limited, too; therefore, it's important to release threads as soon as possible. A task-based asynchronous action or method releases its thread while the awaited operation is in flight, then grabs a thread from the pool to return the result of the action once it completes. This way, many requests can be processed by a few threads, which leads to better responsiveness for your application.

Although the task-based asynchronous pattern can be very handy for the right applications, it must be used with care. There are a few concerns to keep in mind when you design or implement a project based on the Task-based Asynchronous Pattern (TAP). You can see many of them here; however, the biggest challenge developers face when using the async and await keywords is that in this context they have to deal with threads slightly differently. For example, you can create a method that returns a Task (e.g. Task<Product>). Normally you could call .Wait() on that task, or merely read task.Result, to force the task to run and fetch the result. In a method or action built on TAP, either of those calls blocks the running thread and will make your program sluggish, or may even cause deadlocks, as the sketch below shows.
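A sketch of the difference (GetNameAsync is a hypothetical stand-in for a real IO-bound call):

using System.Threading.Tasks;
using System.Web.Mvc;

public class NamesController : Controller
{
    // Good: the request thread is released while GetNameAsync is in flight.
    public async Task<ActionResult> Details(int id)
    {
        ViewBag.Name = await GetNameAsync(id);
        return View();
    }

    // Bad: .Result blocks the request thread, and under ASP.NET's
    // synchronization context it can deadlock.
    public ActionResult DetailsBlocking(int id)
    {
        ViewBag.Name = GetNameAsync(id).Result;
        return View();
    }

    private Task<string> GetNameAsync(int id)
    {
        return Task.FromResult("Product " + id); // stand-in for real IO
    }
}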

  • Distribute caching and session state

It's very common for developers to build a web application on a single development machine and assume the product will run on a single server too, whereas that's usually not the case for big public-facing web sites. They often get deployed to more than one server sitting behind a load balancer. Even though you can still deploy a web site with in-proc caching on multiple servers by using sticky sessions (where the load balancer directs all requests belonging to the same session to a single server), you may have to keep multiple copies of session data and cached data. For example, if you deploy your product on a web farm made of four servers and keep the session data in-proc, when a request comes through, the chance of hitting a server that already holds the cached data is 1 in 4, or 25%, whereas with a centralized caching mechanism in place the chance of finding a cached item is 100% for every request. This is crucial for web sites that rely heavily on cached data.

Another advantage of having a centralized caching mechanism (using something like AppFabric or Redis) is the ability to implement a proactive caching system around the actual product. A proactive caching mechanism can pre-load the most popular items into the cache before they are even requested by a client. This can massively improve the performance of a big data-driven application, provided you manage to keep the cache synchronized with the actual data source. A minimal distributed-cache sketch is given below.
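This is roughly what a shared cache looks like with StackExchange.Redis (a minimal sketch; the connection string and expiry are illustrative):

using System;
using StackExchange.Redis;

public static class CentralCache
{
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(
            () => ConnectionMultiplexer.Connect("localhost:6379"));

    // Every web node reads the same cache, so a cached item stored by
    // one node is a hit for all of them.
    public static string Get(string key)
    {
        return Connection.Value.GetDatabase().StringGet(key);
    }

    public static void Set(string key, string value, TimeSpan expiry)
    {
        Connection.Value.GetDatabase().StringSet(key, value, expiry);
    }
}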

  • Create Web Gardens

As mentioned before, in an IO-bound web application that involves quite a few long-running operations (e.g. web service calls), you may want to free up your main thread as much as possible. By default, every web site runs under one main thread, which is responsible for keeping the site alive, and unfortunately, when it's too busy, your site becomes unresponsive. There is one way of adding more "main threads" to your application, which is adding more worker processes to your site under IIS. Each worker process includes a separate main thread, so if one is busy there is another one to process incoming requests.

Having more than one worker process turns your site into a Web Garden, which requires your Session and Application data to be persisted out-of-proc (e.g. on a state server or in SQL Server). A configuration sketch is given below.
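As a sketch, pointing session state at SQL Server in web.config looks something like this (the connection string is illustrative); the number of worker processes itself is raised through the application pool's "Maximum Worker Processes" setting in IIS:

<!-- web.config: keep session state outside the worker processes, so any
     process in the Web Garden can serve any request -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=.;Integrated Security=True"
                timeout="20" />
</system.web>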

  • Use caching and lazy loading in a smart way

There is no need to emphasize that if you cache a commonly accessed bit of data in memory, you will be able to reduce the database and web service calls. This specifically helps IO-bound applications, which, as I said before, can cause a lot of grief when the site is under load.

Another approach to improving the responsiveness of your site is lazy loading. Lazy loading means that the application does not hold a certain piece of data yet, but it knows where that data is. For example, if there is a drop-down control on your web page that is meant to display a list of products, you don't have to load all the products from the database when the page loads. You can add a jQuery function to the page that populates the drop-down list the first time it is pulled down. You can apply the same technique in many places in your code, such as when you work with LINQ queries and CLR collections; a server-side sketch follows.
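On the server side, Lazy<T> gives the same effect (a minimal sketch; ProductService and the hard-coded list are illustrative stand-ins for a real repository):

using System;
using System.Collections.Generic;

public class ProductService
{
    // The product list is not fetched until something asks for it,
    // and then it is fetched only once.
    private readonly Lazy<List<string>> _products =
        new Lazy<List<string>>(LoadProductsFromDatabase);

    public List<string> Products
    {
        get { return _products.Value; }
    }

    private static List<string> LoadProductsFromDatabase()
    {
        // stand-in for the real database call
        return new List<string> { "Apple", "Orange" };
    }
}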

  • Do not put C# code in your MVC views

Your ASP.NET MVC views are compiled at run time, not at compile time. Therefore, if you include too much C# code in them, that code will not be compiled into your DLLs. Not only does this damage the testability of your software, it also makes your site slower, because each view takes longer to display (it must be compiled first). Another downside of adding code to views is that it cannot run asynchronously, so if you decide to build your site on the Task-based Asynchronous Pattern (TAP), you won't be able to take advantage of asynchronous methods and actions in the views.

For example, suppose there is a method like this in your code:

public async Task<string> GetName(int code)
{
    var result = …
    return await result;
}

This method can be run asynchronously in the context of an asynchronous ASP.NET MVC action like this:

public async Task<ActionResult> Index(CancellationToken ctx)
{
    var name = await GetName(100);
    return View();
}

But if you call this method in a view, then because the view is not asynchronous, you will have to run it in a thread-blocking way like this:

var name = GetName(100).Result;

.Result blocks the running thread until GetName() has processed our request, so the execution of the app halts for a while, whereas when this code is called with the await keyword the thread is not blocked.

  • Use Fire & Forget when applicable

If two or more operations do not form a single transaction, you probably do not have to run them sequentially. For example, if users can sign up and create an account on your web site, and once they register you save their details in the database and then send them an email, you don't have to wait for the email to be sent before finalizing the operation.

In such a case, the best approach is probably to start a new thread that sends the email and return to the main thread straight away. This is called a fire-and-forget mechanism, and it can improve the responsiveness of an application. A sketch is given below.
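A minimal sketch (SaveAccount and SendWelcomeEmail are hypothetical helpers):

using System.Threading.Tasks;

public class SignUpService
{
    public void Register(string email)
    {
        SaveAccount(email);   // part of the transaction: must complete

        // Fire and forget: don't make the user wait for the email.
        // Note: on .NET 4.5.2+, HostingEnvironment.QueueBackgroundWorkItem
        // is safer inside ASP.NET, since bare Task.Run work can be lost
        // when the app pool recycles.
        Task.Run(() => SendWelcomeEmail(email));
    }

    private void SaveAccount(string email) { /* ... */ }
    private void SendWelcomeEmail(string email) { /* ... */ }
}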

  • Build for x64 CPU

32-bit applications are limited to a smaller amount of memory and have access to fewer CPU features/instructions. To overcome these limitations, if your server is a 64-bit one, make sure your site is running in 64-bit mode (by making sure the option for running the site in 32-bit mode is not enabled in IIS). Then compile and build your code for the x64 CPU rather than Any CPU.

One example of x64 being helpful: to improve the responsiveness and performance of a data-driven application, having a good caching mechanism in place is a must. In-proc caching is a memory-consuming option because everything is stored within the memory boundaries of the site's application pool. An x86 process can allocate at most 4 GB of memory, so if loads of data are added to the cache, that limit is soon hit. If the same site is built explicitly for an x64 CPU, the limit is removed, more items can be added to the cache, and the reduced communication with the database leads to better performance. You can verify the bitness at run time with a check like the one below.
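A quick sanity check (a sketch) you can log at startup to confirm the site really runs as a 64-bit process after changing the IIS and build settings:

using System;

public static class BitnessCheck
{
    public static string Describe()
    {
        return Environment.Is64BitProcess
            ? "Running as a 64-bit process"
            : "Running as a 32-bit process (check the app pool's " +
              "'Enable 32-Bit Applications' setting)";
    }
}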

  • Use monitoring and diagnostic tools on the server

There may be many performance issues that you never see with the naked eye, because they never appear in the error logs. Identifying performance issues is even more daunting when the application is already on the production servers, where you have almost no chance of debugging.

To find slow processes, thread blocks, hangs, errors and so forth, it's highly recommended to install a monitoring and/or diagnostic tool on the server and have it track and monitor your application constantly. I have personally used New Relic (which is a SaaS product) to check the health of our online sites. See HERE for more details and for creating your free account.

  • Profile your running application

Once you finish developing your site, deploy it to IIS, then attach a profiler (e.g. the Visual Studio Profiler) and take snapshots of various parts of the application, for example the purchase operation or the user sign-up operation. Then check whether there is any slow or blocking code in there. Finding those hot spots at an early stage can save you a great amount of time, reputation and money.


Web API and returning a Razor view


There are scenarios in which an API in a Web API application needs to return formatted HTML rather than a JSON message. For example, we worked on a project where the APIs performed searches and returned the results as JSON or XML, while a few of them had to return HTML to be used by an Android app (in a web view container).

One solution would be to break the controller in two: one deriving from MVC's Controller class and the other from ApiController. However, since those APIs are in the same category in terms of functionality, I would keep them in the same controller.

Moreover, using ApiController and returning HttpResponseMessage lets us modify the implementation details in the future without changing the return type (e.g. from ActionResult to HttpResponseMessage), and it will also make it easier to upgrade to Web API 2 later.

The advent of IHttpActionResult in Web API 2 allows developers to return custom data. In case you are not using ASP.NET MVC 5 yet, or you are after an easier way, keep reading!

To parse and return a Razor view in a Web API project, simply add some views to your application just as you would for a normal ASP.NET MVC project. Then, through NuGet, find and add RazorEngine, which is a cool tool for reading and parsing Razor views.

Inside the API, simply create an object to act as the model, load the content of the view as text, pass the view's body and the model to RazorEngine, and get back a rendered version of the view. Since the API is meant to return HTML, the content type must be set to text/html. A sketch of what this can look like is given below.

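A minimal sketch of such an API method (the action name, view path and model shape are illustrative; Razor.Parse is the older RazorEngine API, and recent versions expose Engine.Razor.RunCompile instead):

using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Web;
using System.Web.Http;

public class ProductApiController : ApiController
{
    [HttpGet]
    public HttpResponseMessage ProductHtml()
    {
        // The model can be any object; RazorEngine binds it dynamically.
        var model = new { Name = "Apple", Price = 1.2m };

        // Load the Razor view as plain text and let RazorEngine render it.
        var viewPath = HttpContext.Current.Server
            .MapPath("~/Views/Products/Product.cshtml");
        var viewSource = File.ReadAllText(viewPath);
        string html = RazorEngine.Razor.Parse(viewSource, model);

        // The API returns HTML, so set the content type accordingly.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(html, Encoding.UTF8, "text/html")
        };
    }
}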

In this example, the view has markup like the one given below:

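A minimal sketch of such a view (the property names match the hypothetical model above; since the model is dynamic, no @model directive is needed):

<html>
<body>
    <h1>@Model.Name</h1>
    <p>Price: @Model.Price</p>
</body>
</html>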

As you can see, the model is treated as "dynamic", which lets the view accept a wide range of types. You can move the code from your API into a helper class (or something similar) and create a function that accepts a view name and a model and returns the rendered HTML.

Creating a Log Visit Report


Oops… today I decided to post a new article and was surprised to find that the blog had been suspended! Thanks to Matt, it is alive and kicking again!

Anyway, let's see how we can generate a log visit report for our ASP.NET web site. By "log visit" I mean the statistics of a web site showing which page (or file) was viewed, at what time, and by whom!

The good news is that IIS can record the above information for us. It can log all activities done by your web site in the following ways and formats:

W3C Extended log file format

World Wide Web Consortium (W3C) Extended format is customizable and is saved as an ASCII text file. You can select your desired fields from a long list of choices.

IIS log file format

The IIS log file format has fixed columns and is stored as an ASCII text file. This file includes more information than the W3C format.

NCSA Common log file format

National Center for Supercomputing Applications (NCSA) Common log file format is a fixed ASCII format that is available for web sites, but not for FTP sites.

ODBC logging

Open Database Connectivity (ODBC) logging records a fixed set of data properties in a database.

Logging to text files is quicker than logging to a database and has no real performance pitfalls, but when it comes to reporting, you will probably want to import the text files into a database.

Logging via ODBC will slow your web site down to some extent, because it records a great deal of (sometimes unnecessary) information. For example, in the W3C format, only information about pages that loaded successfully is stored, while with the ODBC method all referenced objects, such as .js or image files, are recorded, no matter whether the resource loaded successfully or not.

That said, since ODBC makes this post shorter, I will use the ODBC method for my sample program.

To enable logging with ODBC, first you must create a table for storing the log information, and then create an ODBC data source through which IIS can store the logs.

To create the log table, which is usually called InetLog, go to the System32\InetSrv folder and find the LogTemp.sql file. This file contains a script that creates the required table in your database. The script is written for SQL Server, but you can modify it to run on other databases such as MS Access or even MySQL.

In the next step, create a System DSN that points to the above database. It must be a System DSN because IIS will use it under a user account different from the one you are logged in with (usually the Network Service account).

To create a new System DSN, go to Control Panel -> Administrative Tools and double-click Data Sources (ODBC). Then go to the System DSN tab and click Add. Select SQL Server, enter the name of your server and database, and specify a user/password. You had better use a SQL Server login (do not choose Windows Authentication) and specify a password that will not be changed.

After doing so, change the default database of this DSN to the database the logging table resides in. Click OK and close the dialog box.

Now go to Control Panel -> Administrative Tools and double-click Internet Information Services Manager.

Open Web Sites and find your virtual directory or web site. Right-click it, choose Properties, and under the Home Directory tab find the Enable Logging check box. Tick it, and in the Active log format list box select ODBC. Afterwards, click the Configure button to view the ODBC settings. In this dialog box, enter the name of the ODBC DSN you created in the previous step, the name of the table (InetLog), and the user name and password for the logging database.

For example:

ODBC: WordPressLogDSN

Table: InetLog

Login: sa

Password: 123

In the above sample I used "sa", SQL Server's default login name. This is risky and I do not recommend it! You had better use a login with no administrative permissions that only has access to the logging table or database.

From now on, any visit to a page will be recorded!

NOTICE: The logging table will fill up with tons of records; therefore, I strongly recommend storing the InetLog table in a separate FileGroup. You should also schedule a script that removes unnecessary records from the InetLog table.

Writing The Code

In the rest of the post I am going to write a sample page that displays how many times each .aspx or .html page has been visited. I will show the stats both as numbers and as a bar chart.

As I mentioned, data about every referenced resource is stored in the InetLog table, so we should be able to determine whether a record belongs to a page or not. Also, only requests whose ServiceStatus equals 200 were loaded or accessed without error. So I need a SQL function that determines whether a resource is a page (i.e. .aspx or .html); whether it was accessed successfully is checked against ServiceStatus.

I will call the function IsPage. The code is very simple, so I will just include the script here:

create function IsPage(@PageName varchar(200))
returns int
begin
    declare @len int
    set @len = len(@PageName)

    -- strip a trailing slash, if any, and shorten the length accordingly
    if substring(@PageName, @len, 1) = '/'
    begin
        set @PageName = substring(@PageName, 1, @len - 1)
        set @len = @len - 1
    end

    declare @Extension varchar(4)
    set @Extension = upper(substring(@PageName, @len - 3, 4))

    if @Extension = 'ASPX' or @Extension = 'HTML'
        return 1

    return 0
end

I will use this function to build my stats and also to remove the records of non-page resources, such as .js files, if I don't need them (see the snippet below).
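For example, the cleanup could look like this (a sketch; run it on a schedule if the table grows fast):

-- drop the records for non-page resources (scripts, images, ...)
delete from InetLog
where dbo.IsPage(target) = 0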

In the next step I will write a stored procedure that determines how many times each page has been visited. To let the script run on older versions of SQL Server, I have kept it as simple as possible:

create procedure rep_LogVisit
    @BeginDate smalldatetime,
    @EndDate smalldatetime
as
select Count(1) as TotalCount, target as PageName
from InetLog
where ServiceStatus = 200
    and dbo.IsPage(target) = 1
    and LogTime >= @BeginDate and LogTime <= @EndDate
group by target
go

As the above code indicates, rep_LogVisit counts the records whose ServiceStatus equals 200 (successful), whose resource is a page, and whose log entry was recorded between the given dates.
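For example, to pull the stats for a given period (the dates are illustrative):

exec rep_LogVisit @BeginDate = '2009-01-01', @EndDate = '2009-01-31'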

In order to display the result, I use a GridView control with two columns: one for the page name and one for the visit count:

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
    CellPadding="4" ForeColor="#333333" GridLines="None">
    <Columns>
        <asp:BoundField DataField="PageName" HeaderText="Page Name" />
        <asp:TemplateField HeaderText="Visit Count">
            <ItemTemplate>
                <asp:Panel runat="server" ID="divBar" style="background-color:Red"
                    Width='<%# GetLength((int)Eval("TotalCount")) %>'>
                    <asp:Label runat="server" ID="lblCount"
                        Text='<%# Eval("TotalCount") %>' ForeColor="White"></asp:Label>
                </asp:Panel>
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>

Now I will call the rep_LogVisit stored procedure and bind the grid to its result. I also have to write a function called GetLength to calculate the length of each bar:

// requires: using System; using System.Data; using System.Data.SqlClient; using System.Linq;

int MaxCount = 0;

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        using (SqlConnection conn = new SqlConnection("Your Connection String Here"))
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "rep_LogVisit";
            // Add the @BeginDate/@EndDate parameters here; skipped to simplify the code.

            DataTable table = new DataTable();
            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            adapter.Fill(table);

            // Get the biggest visit count, used to scale the bars
            if (table.Rows.Count > 0)
                MaxCount = table.AsEnumerable().Max(c => c.Field<int>("TotalCount"));

            GridView1.DataSource = table;
            GridView1.DataBind();
        }
    }
}

// Calculate the length of each bar, scaled to a 300-pixel maximum
internal int GetLength(int TotalCount)
{
    if (MaxCount == 0)
        return 0;

    return Convert.ToInt32(Math.Round(((double)TotalCount / MaxCount) * 300));
}

At run time, you will see something like the image below:

[Screenshot: the report grid, with a red bar representing each page's visit count]