Building high-performance ASP.NET applications


If you are building public-facing web sites, one of the things you want to achieve by the end of the project is good performance under load. That means you have to make sure your product works under heavy load (e.g. 50 concurrent users, or 200 users per second) even if, at the moment, you don't think you will have that much load. Chances are the web site will attract more and more users over time, and if it is not load-tolerant it will start failing, leaving you with an unhappy customer and a ruined reputation.

There are many articles on the Internet about improving the performance of ASP.NET web sites, and they all make sense; however, I think there are a few more things you can do to save yourself from massive dramas. So what steps can be taken to produce a high-performance ASP.NET or ASP.NET MVC application?

  • Load test your application from early stages

The majority of developers tend to leave load testing (if they ever do it) until the application is developed and has passed the integration and regression tests. Even though performing a load test at the end of the development process is better than not doing it at all, it may be far too late to fix the performance issues once the code has already been written. A very common example: when the application does not respond properly under load, scaling out (adding more servers) is considered, and sometimes this is simply not possible because the code is not suitable for it. For instance, when the objects stored in Session are not serializable, adding more web nodes or more worker processes is impossible. If you find out at the early stages of development that your application may need to be deployed on more than one server, you will do your tests in an environment that is close to the final environment in terms of configuration, number of servers and so on, and your code will adapt far more easily.
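Even a crude concurrency smoke test against a staging build can surface these problems early. The sketch below is illustrative only: the `MeasureWorstCase` helper is my own, and `Task.Delay` stands in for a real HTTP call to your site (e.g. an `HttpClient.GetAsync` against a staging server); a production load test belongs in a dedicated tool such as JMeter or the Visual Studio load test runner.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

public static class LoadSmokeTest
{
    // Runs `operation` `concurrency` times in parallel and returns the
    // slowest observed duration. This only illustrates the idea of
    // exercising code under concurrency from day one.
    public static async Task<TimeSpan> MeasureWorstCase(Func<Task> operation, int concurrency)
    {
        var tasks = Enumerable.Range(0, concurrency).Select(async _ =>
        {
            var sw = Stopwatch.StartNew();
            await operation();
            return sw.Elapsed;
        }).ToArray();

        var timings = await Task.WhenAll(tasks);
        return timings.Max();
    }

    public static void Main()
    {
        // Task.Delay(100) stands in for a real request to your site.
        var worst = MeasureWorstCase(() => Task.Delay(100), 50).Result;
        Console.WriteLine("Slowest of 50 concurrent operations: "
            + (int)worst.TotalMilliseconds + " ms");
    }
}
```

Run something like this against every build, not just the final one, and watch how the worst-case time moves as the code evolves.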

  • Use high-performance libraries

Recently I was diagnosing the performance issues of a web site and came across a hot spot in the code where JSON messages coming from a third-party web service had to be de-serialized many times. Those JSON messages were de-serialized with Newtonsoft.Json, and it turned out that Newtonsoft.Json was not the fastest library when it came to de-serialization. We replaced Json.NET with a faster library (e.g. ServiceStack's serializer) and got a much better result.

Again, if the load test had been done at an early stage, when we picked Json.NET as our serialization library, we would have found that performance issue much sooner and would not have had to make so many changes to the code and re-test it all over again.

  • Is your application CPU-intensive or IO-intensive?

Before you start implementing your web site, while the project is being designed, one thing you should think about is whether your site is CPU-intensive or IO-intensive. This is important for knowing your strategy for scaling your product.

For example, if your application is CPU-intensive you may want to use a synchronous pattern, parallel processing and so forth, whereas for a product with many IO-bound operations, such as communicating with external web services or network resources (e.g. a database), the Task-based Asynchronous Pattern may be more helpful for scaling out your product. In addition, you may want a centralized caching system in place, which will let you create Web Gardens and Web Farms in the future, spreading the load across multiple worker processes or servers.
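To make the distinction concrete, here is a minimal sketch of both strategies: parallel processing for CPU-bound work versus task-based asynchrony for IO-bound work. The method names are mine, and `Task.Delay` stands in for a web service or database call.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class ScalingStrategies
{
    // CPU-bound work scales across cores with parallel processing (PLINQ here).
    public static long SumOfSquaresParallel(int n)
    {
        return ParallelEnumerable.Range(1, n).Sum(i => (long)i * i);
    }

    // IO-bound work scales by not holding threads while waiting:
    // all 20 simulated calls overlap instead of running one after another.
    public static async Task<int> FetchAllAsync(int calls)
    {
        var pending = Enumerable.Range(0, calls).Select(async i =>
        {
            await Task.Delay(100); // simulated network latency
            return i;
        });
        var results = await Task.WhenAll(pending);
        return results.Length;
    }

    public static void Main()
    {
        Console.WriteLine(SumOfSquaresParallel(1000)); // 333833500
        // Completes in roughly 100 ms rather than ~2 s, because the waits overlap.
        Console.WriteLine(FetchAllAsync(20).Result);   // 20
    }
}
```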

  • Use Task-based Asynchronous Model, but with care!

If your product relies on many IO-bound operations, or includes long-running operations that make expensive IIS threads wait for an operation to complete, you should consider using the Task-based Asynchronous Pattern for your ASP.NET MVC project.

There are many tutorials on the Internet about asynchronous ASP.NET MVC actions, so I will refrain from explaining them in this post. However, I do have to point out that traditional synchronous actions in an ASP.NET (MVC) site keep IIS threads busy until the operation is done and the request is processed. This means that if the site is waiting for an external resource (e.g. a web service) to respond, the thread stays busy. The number of threads in .NET's thread pool that can be used to process requests is limited too, so it's important to release threads as soon as possible. A task-based asynchronous action or method releases the thread while the request is being processed, then grabs a new thread from the thread pool to return the result of the action. This way, many requests can be processed by a few threads, which leads to better responsiveness for your application.

Although the task-based asynchronous pattern can be very handy for the right applications, it must be used with care. There are a few concerns to keep in mind when you design or implement a project based on the Task-based Asynchronous Pattern (TAP), but the biggest challenge developers face when using the async and await keywords is knowing that, in this context, they have to deal with threads slightly differently. For example, you can create a method that returns a Task (e.g. Task&lt;Product&gt;). Normally you could call task.Wait() on that task, or simply read task.Result to force the task to run and fetch the result. In a method or action built on TAP, either of those calls will block your running thread, making your program sluggish or even causing deadlocks.
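A small illustration of the difference (the `GetProductNameAsync` method and its delay are hypothetical stand-ins for a real IO call):

```csharp
using System;
using System.Threading.Tasks;

public static class SyncOverAsync
{
    public static async Task<string> GetProductNameAsync(int code)
    {
        await Task.Delay(10); // stands in for a real IO operation
        return "Product-" + code;
    }

    // Safe: the calling thread is released while the task runs.
    public static async Task<string> SafeCallerAsync()
    {
        return await GetProductNameAsync(100);
    }

    // Risky: .Result blocks the calling thread until the task completes.
    // Inside ASP.NET (which has a synchronization context), the awaited
    // continuation needs that same blocked thread, so this can deadlock.
    public static string RiskyCaller()
    {
        return GetProductNameAsync(100).Result;
    }

    public static void Main()
    {
        // A console app has no synchronization context, so both calls work
        // here; in an ASP.NET request, RiskyCaller could hang forever.
        Console.WriteLine(SafeCallerAsync().Result); // Product-100
        Console.WriteLine(RiskyCaller());            // Product-100
    }
}
```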

  • Distribute caching and session state

    It's very common for developers to build a web application on a single development machine and assume that the product will run on a single server too, whereas that is usually not the case for big public-facing web sites. They often get deployed to multiple servers behind a load balancer. Even though you can still deploy a web site with in-proc caching on multiple servers using sticky sessions (where the load balancer directs all requests belonging to the same session to a single server), you may end up keeping multiple copies of session data and cached data. For example, if you deploy your product on a web farm of four servers and keep the session data in-proc, when a request comes through, the chance of hitting a server that already holds the cached data is 1 in 4, or 25%, whereas with a centralized caching mechanism in place, the chance of finding a cached item is 100% for every request. This is crucial for web sites that rely heavily on cached data.

    Another advantage of a centralized caching mechanism (using something like AppFabric or Redis) is the ability to implement a proactive caching system around the actual product. A proactive caching mechanism can pre-load the most popular items into the cache before any client even requests them. This can massively improve the performance of a big data-driven application, provided you manage to keep the cache synchronized with the actual data source.
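As a rough sketch of the idea, the class below pre-loads popular items and serves later requests from the cache. A `ConcurrentDictionary` stands in for the real centralized store (AppFabric, Redis) so the sketch stays self-contained, and the loader delegate stands in for the database; all names here are illustrative.

```csharp
using System;
using System.Collections.Concurrent;

public class ProactiveCache
{
    // In production this dictionary would be a shared store (Redis,
    // AppFabric) visible to every web node, not per-process memory.
    private readonly ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();
    private readonly Func<int, string> _loadFromSource;

    public ProactiveCache(Func<int, string> loadFromSource)
    {
        _loadFromSource = loadFromSource;
    }

    // The proactive part: pre-load popular items before any client asks.
    public void WarmUp(params int[] popularIds)
    {
        foreach (var id in popularIds)
            _cache.TryAdd(id, _loadFromSource(id));
    }

    // Serve from the cache, falling back to the data source on a miss.
    public string Get(int id)
    {
        return _cache.GetOrAdd(id, _loadFromSource);
    }
}
```

You might call `WarmUp` from `Application_Start`, passing the IDs of your most popular items, so the first real visitor already hits a warm cache.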

  • Create Web Gardens

As mentioned before, in an IO-bound web application that involves quite a few long-running operations (e.g. web service calls), you may want to free up your main thread as much as possible. By default, every web site runs under one worker process with one main thread responsible for keeping your web site alive, and unfortunately, when it's too busy, your site becomes unresponsive. One way of adding more "main threads" to your application is to add more worker processes to your site in IIS. Each worker process has a separate main thread, so if one is busy there will be another one to process incoming requests.

Having more than one worker process turns your site into a Web Garden, which requires your Session and Application data to be persisted out-of-proc (e.g. on a State Server or SQL Server).

  • Use caching and lazy loading in a smart way

    There is no need to emphasize that caching a commonly accessed piece of data in memory reduces database and web service calls. This especially helps IO-bound applications, which, as I said before, can cause a lot of grief when the site is under load.

    Another approach for improving the responsiveness of your site is lazy loading. Lazy loading means the application does not hold a certain piece of data, but knows where that data is. For example, if there is a drop-down control on your web page that displays a list of products, you don't have to load all the products from the database when the page loads. You can add a jQuery function to the page that populates the drop-down list the first time it is pulled down. You can apply the same technique in many places in your code, such as when working with LINQ queries and CLR collections.
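On the server side, `Lazy<T>` gives you the same deferred-loading behavior in C#. The counter and product list in this sketch are made up for illustration; the point is that the expensive load runs only if something actually asks for the data, and then only once.

```csharp
using System;

public static class ProductCatalogue
{
    public static int DatabaseHits = 0;

    // The factory delegate runs only when Value is first read, and only
    // once, no matter how many requests touch it afterwards.
    public static readonly Lazy<string[]> Products = new Lazy<string[]>(() =>
    {
        DatabaseHits++; // stands in for a real database query
        return new[] { "Keyboard", "Mouse", "Monitor" };
    });

    public static void Main()
    {
        Console.WriteLine(DatabaseHits);          // 0: nothing loaded yet
        Console.WriteLine(Products.Value.Length); // 3: loaded on first access
        Console.WriteLine(Products.Value.Length); // 3: served from the cached value
        Console.WriteLine(DatabaseHits);          // 1: the query ran exactly once
    }
}
```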

  • Do not put C# code in your MVC views

    Your ASP.NET MVC views are compiled at run time, not at compile time. Therefore, if you include too much C# code in them, that code will not be pre-compiled into your DLLs. Not only does this hurt the testability of your software, it also makes your site slower, because every view takes longer to display (it must be compiled first). Another downside of putting code in the views is that it cannot run asynchronously, so if you decide to build your site on the Task-based Asynchronous Pattern (TAP), you won't be able to take advantage of asynchronous methods and actions in the views.

    For example, if there is a method like this in your code:

    public async Task<string> GetName(int code)
    {
        var result = …
        return await result;
    }

This method can be run asynchronously in the context of an asynchronous ASP.NET MVC action like this:

    public async Task<ActionResult> Index(CancellationToken ctx)
    {
        var name = await GetName(100);
        return Content(name);
    }

But if you call this method from a view, because the view is not asynchronous, you will have to run it in a thread-blocking way like this:

var name = GetName(100).Result;

.Result will block the running thread until GetName() finishes, so the execution of the app will halt for a while, whereas when this code is called with the await keyword, the thread is not blocked.

  • Use Fire & Forget when applicable

If two or more operations do not form a single transaction, you probably don't have to run them sequentially. For example, if users can sign up and create an account on your web site, and upon registration you save their details in the database and then send them an email, you don't have to wait for the email to be sent before finalizing the operation.

In such a case, probably the best way is to start a new thread that sends the email to the user while you return to the main thread. This is called a fire-and-forget mechanism, and it can improve the responsiveness of an application.
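A minimal sketch of the idea (all names are illustrative, and note the caveat in the comment about unobserved exceptions):

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class SignUpService
{
    public static void SendWelcomeEmail(string email)
    {
        Thread.Sleep(200); // stands in for a slow SMTP call
        Console.WriteLine("Email sent to " + email);
    }

    // Saves the account synchronously (the critical part of the operation),
    // then fires the email off on a background thread and returns
    // without waiting for it.
    public static Task SignUp(string email)
    {
        Console.WriteLine("Account saved for " + email);

        // Fire and forget. In production, catch and log exceptions inside
        // the task; an unobserved exception would otherwise be lost silently.
        return Task.Run(() => SendWelcomeEmail(email));
    }

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        var emailTask = SignUp("user@example.com");
        Console.WriteLine("Sign-up returned after " + sw.ElapsedMilliseconds + " ms");
        emailTask.Wait(); // only so this demo doesn't exit before the email
    }
}
```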

  • Build for x64 CPU

32-bit applications are limited to a smaller amount of memory and have access to fewer CPU features/instructions. To overcome these limitations, if your server is a 64-bit one, make sure your site runs in 64-bit mode (by making sure the option for running a site in 32-bit mode is not enabled on its IIS application pool). Then compile and build your code for x64 rather than Any CPU.

One example of x64 being helpful: to improve the responsiveness and performance of a data-driven application, a good caching mechanism is a must. In-proc caching is a memory-hungry option because everything is stored within the memory boundaries of the site's application pool. An x86 process can address at most 4 GB of memory (and often less in practice), so if a lot of data is added to the cache, this limit will soon be hit. If the same site is built explicitly for x64, this limit is removed, so more items can be added to the cache, which means less communication with the database and better performance.
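You can verify at runtime which mode your process actually runs in; a check like this is a handy sanity test to drop into a diagnostics page:

```csharp
using System;

public static class BitnessCheck
{
    public static void Main()
    {
        // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one.
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("Pointer size:   " + IntPtr.Size + " bytes");

        if (Environment.Is64BitOperatingSystem && !Environment.Is64BitProcess)
            Console.WriteLine("Warning: 32-bit process on a 64-bit server; cache size is capped.");
    }
}
```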

  • Use monitoring and diagnostic tools on the server

    There may be many performance issues you never spot with the naked eye because they never appear in error logs. Identifying performance issues is even more daunting once the application is already on the production servers, where you have almost no chance of debugging.

    To find slow operations, thread blocks, hangs, errors and so forth, it's highly recommended to install a monitoring and/or diagnostic tool on the server and have it track and monitor your application constantly. I have personally used New Relic (a SaaS offering) to check the health of our online sites.

  • Profile your running application

    Once you finish developing your site, deploy it to IIS, attach a profiler (e.g. the Visual Studio Profiler) and take snapshots of various parts of the application (for example, the purchase operation or the user sign-up operation). Then check whether there is any slow or blocking code there. Finding those hot spots at an early stage might save you a great amount of time, reputation and money.


The magic behind the Google Search


Have you ever wondered how Google performs such fast searches across a wide variety of file types? For example, how is Google able to suggest a list of search expressions while you are still typing your keywords?

Another example is Google image search: you upload an image and Google finds similar photos for you in no time.

The key to this magic is SimHash, a mechanism/algorithm invented by Moses Charikar (and patented by Google). The name SimHash comes from combining Similarity and Hash. Instead of comparing the objects with each other to find their similarity, we convert each one to an N-bit number that represents it (known as a hash) and compare those numbers. In other words, if we maintain a number that represents each object, we can compare those numbers to estimate the similarity of the two objects.

The basics of SimHash are as below:

  1. Convert the object to a hash value. (From my experience, this is better as an unsigned integer.)
  2. Count the number of matching bits. For example, is bit 1 the same in both hash values? Is bit 2 the same? And so on.
  3. Depending on the size of the hash value (number of bits), you will get a number between 0 and N, where N is the length of the hash value. The number of differing bits is called the Hamming distance, introduced by Richard Hamming in 1950.
  4. The result must then be normalized and represented as a value such as a percentage. To do so, we can use a simple formula: Similarity = (HashSize - HammingDistance) / HashSize.
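As a quick worked example of step 4 (the formula is the one above; the method name is mine):

```csharp
using System;

public static class SimilarityFormula
{
    // Similarity = (HashSize - HammingDistance) / HashSize, as a percentage.
    public static double Similarity(int hashSize, int hammingDistance)
    {
        return (hashSize - hammingDistance) * 100.0 / hashSize;
    }

    public static void Main()
    {
        // Two 32-bit fingerprints that differ in 4 bit positions:
        Console.WriteLine(SimilarityFormula.Similarity(32, 4)); // 87.5
        // Identical fingerprints:
        Console.WriteLine(SimilarityFormula.Similarity(32, 0)); // 100
    }
}
```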

Since a hash value can be used to represent any kind of data, such as text or an image file, this technique can be used to perform a fast search on almost any file type.

To calculate the hash value, we have to decide on the hash size, which is normally 32 or 64. As I said before, an unsigned value works better. We also need to choose a chunk size. The chunk size is used to break the data down into small pieces, called shingles. For example, if we want to convert a string such as "Hello World" to a hash value and the chunk size is 3, our chunks would be:

1-Hel

2-ell

3-llo

Etc.

To convert binary data to a hash value, you have to break it down into chunks of bits, i.e. pick every K bits. Google suggests N=64 and K=3.

To calculate the hash value in a SimHash manner, we have to take the following steps:

  1. Tokenize the data: break it down into small chunks, as mentioned above, and store the chunks in an array.
  2. Create an array (called a vector) of size N, where N is the size of the hash (let's call this array V).
  3. Loop over the array of tokens (let i be the index of each token).
  4. Loop over the bits of each token (let j be the index of each bit).
  5. If bit j of token i is 1, add 1 to V[j]; otherwise subtract 1 from V[j].
  6. Assume the fingerprint is an unsigned value (32 or 64 bits) named F.
  7. Once the loops finish, go through the array V; if V[i] is greater than 0, set bit i of F to 1, otherwise to 0.
  8. Return F as the fingerprint.

Here is the code:

private int DoCalculateSimHash(string input)
{
    ITokeniser tokeniser = new Tokeniser();
    var hashedtokens = DoHashTokens(tokeniser.Tokenise(input));
    var vector = new int[HashSize];
    for (var i = 0; i < HashSize; i++)
    {
        vector[i] = 0;
    }

    foreach (var value in hashedtokens)
    {
        for (var j = 0; j < HashSize; j++)
        {
            if (IsBitSet(value, j))
            {
                vector[j] += 1;
            }
            else
            {
                vector[j] -= 1;
            }
        }
    }

    var fingerprint = 0;
    for (var i = 0; i < HashSize; i++)
    {
        if (vector[i] > 0)
        {
            fingerprint += 1 << i;
        }
    }
    return fingerprint;
}

And the code to calculate the hamming distance is as below:

private static int GetHammingDistance(int firstValue, int secondValue)
{
    var hammingBits = firstValue ^ secondValue;
    var hammingValue = 0;
    for (int i = 0; i < 32; i++)
    {
        if (IsBitSet(hammingBits, i))
        {
            hammingValue += 1;
        }
    }
    return hammingValue;
}

You may tokenize the given data in different ways. For example, you may break a string down into words, into n-letter chunks, or into n-letter overlapping pieces. From my experience, if N is the size of each chunk and M is the number of overlapping characters, N=4 and M=3 are the best choices.
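Putting all the pieces together, here is a self-contained sketch of the whole pipeline: 3-character shingles, a stable per-token hash, the vector/fingerprint loop, the Hamming distance and the similarity percentage. The FNV-1a token hash is my choice for the demo (string.GetHashCode is not guaranteed to be stable); it is not part of the SimHash algorithm itself.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SimHashDemo
{
    const int HashSize = 32;

    // Overlapping 3-character shingles, e.g. "Hello" -> Hel, ell, llo.
    public static IEnumerable<string> Tokenise(string input, int chunkSize = 3)
    {
        for (var i = 0; i <= input.Length - chunkSize; i++)
            yield return input.Substring(i, chunkSize);
    }

    // FNV-1a: a simple, stable hash for each token.
    public static int HashToken(string token)
    {
        unchecked
        {
            uint hash = 2166136261;
            foreach (var c in token)
                hash = (hash ^ c) * 16777619;
            return (int)hash;
        }
    }

    static bool IsBitSet(int value, int bit)
    {
        return (value & (1 << bit)) != 0;
    }

    public static int CalculateSimHash(string input)
    {
        var vector = new int[HashSize];
        foreach (var hashed in Tokenise(input).Select(HashToken))
            for (var j = 0; j < HashSize; j++)
                vector[j] += IsBitSet(hashed, j) ? 1 : -1;

        var fingerprint = 0;
        for (var i = 0; i < HashSize; i++)
            if (vector[i] > 0)
                fingerprint |= 1 << i;
        return fingerprint;
    }

    public static int HammingDistance(int a, int b)
    {
        var bits = a ^ b;
        var count = 0;
        for (var i = 0; i < HashSize; i++)
            if (IsBitSet(bits, i)) count++;
        return count;
    }

    public static double Similarity(string a, string b)
    {
        var d = HammingDistance(CalculateSimHash(a), CalculateSimHash(b));
        return (HashSize - d) * 100.0 / HashSize;
    }

    public static void Main()
    {
        // Identical inputs always score 100; near-duplicates typically
        // score much higher than unrelated strings.
        Console.WriteLine(Similarity("the quick brown fox", "the quick brown fox")); // 100
        Console.WriteLine(Similarity("the quick brown fox", "the quick brown dog"));
        Console.WriteLine(Similarity("the quick brown fox", "completely different"));
    }
}
```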

You may download the full source code of SimHash from SimHash.CodePlex.com. Bear in mind that SimHash is patented by Google!

What is a Software Architecture Document and how would you build it?


Howdy!

After working as a senior designer and/or software architect on three subcontinents, I came across a kind of phenomenon in Australia! I call it a phenomenon because, first of all, terms such as 'solution architect', 'software architect' and 'enterprise architect' are used interchangeably and sometimes incorrectly. Second, architecture is often ignored, and contractors or consultants usually start doing the detailed design as soon as they receive a requirements document.

This leaves the client (the owner of the project) with a whole bunch of documents that they cannot understand, so they have to hand them over to a development team without even knowing whether the design is what they really wanted.

This happens because such a crucial role is assigned to a senior developer or a designer who thinks purely technically, whilst an architect must be able to look at the problem from different aspects. (It also happens because in Australia titles are given away for free; just ask for one!)

What is Architecture?

Architecture is the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution (IEEE 1471).

The definition suggested by IEEE (above) refers to a solution architect and/or software architect. However, as Microsoft suggests, there are other kinds of architects, such as a Business Strategy Architect.

There are basically six types of Architects:

·        Business Strategy Architect

The purpose of this role is to change the business focus and define the enterprise's to-be status. This role is about the long view and about forecasting.

·        Business Architect

The mission of business architects is to improve the functionality of the business. Their job isn’t to architect software but to architect the business itself and the way it is run.

·        Solution Architect

Solution architect is a relatively new term, and it should refer to an equally new concept. Sometimes, however, it doesn't; it tends to be used as a synonym for application architect.

·        Software Architect

Software architecture is about architecting software meant to support, automate, or even totally change the business and the business architecture.

·        Infrastructure Architect

The technical infrastructure exists for the deployment of the solutions of the solution architect, which means that the solution architect and the technical infrastructure architect should work together to ensure safe and productive deployment and operation of the system.

·        Enterprise Architect

Enterprise Architecture is the practice of applying a comprehensive and rigorous method for describing a current and/or future structure and behaviour for an organization’s processes, information systems, personnel and organizational subunits, so that they align with the organization’s core goals and strategic direction. Although often associated strictly with information technology, it relates more broadly to the practice of business optimization in that it addresses business architecture, performance management, and process architecture as well (Wikipedia).

Solution Architect

As we are techies, let's focus on the Solution Architect role:

It tends to be used as a synonym for application architect. In an application-centric world, each application is created to solve a specific business problem, or a specific set of business problems. All parts of the application are tightly knit together, each integral part being owned by the application. An application architect designs the structure of the entire application, a job that’s rather technical in nature. Typically, the application architect doesn’t create, or even help create, the application requirements; other people, often called business analysts, less technically and more business-oriented than the typical application architect, do that.

So if you are asked to come on board and architect a system based on a whole bunch of requirements, you are very likely being asked to do solution architecture.

How to do that?

A while back, a person without a technical background (but with money, so he is the boss) was lecturing that in an ideal world no team member has to talk to other team members. At the time I was thinking that in my ideal world, which is very close to the Agile world, everybody can (or should) speak to everybody else. This shows that how you architect a system is strongly tied to your methodology. It does not really make a big difference which methodology you follow, as long as you stick to the correct concepts. Likewise, he was saying that the Software Architecture Document is part of the BRD (Business Requirements Document), as if a business person (e.g. the stakeholders) would not understand it because it is technical. And I was thinking to myself: mate! There are different views analyzed in an SAD. Some of them are technical, some of them are not.

What the above story points out to me is that solution architecture is the art of mapping business concerns to technical ones; in other words, it is speaking about technical things in a language that is understandable to business people.

A very good way to do this is to put yourself in the stakeholders' shoes. There are several types of stakeholders in each project, each with their own views and their own concerns. This is the biggest difference between design and architecture: a designer thinks very technically, while an architect can think broadly and look at a problem from different views. Designers often make a huge mistake, which happens a lot in Australia: they put everything in one document. On my current solution architecture job, I was given a 21 MB MS Word document that included everything, from requirements to detailed class and database design. Such a document is very unlikely to be understandable by the stakeholders and very hard for developers to use. I reckon this happens because, firstly, designers don't separate the concerns of stakeholders from those of developers, and secondly, because it's easier to write everything down in one document. But I have to say this is wrong, as the SAD and the design document (e.g. a TSD) are built for different purposes and for different audiences (and in different phases, if you are following a phase-based methodology such as RUP). Putting everything in one document is like cooking dinner by throwing the ingredients and the utensils into one pot and boiling them!

A very good approach for looking at the problem from the stakeholders' point of view is the 4+1 model. In this model, scenarios (or use cases) are the base, and we look at them from a Logical view (the building blocks of the system), a Process view (processes such as asynchronous operations), a Development (aka Implementation) view and a Physical (aka Deployment) view. There are also optional views, such as a Data view, that you can use if you need to. Some of the views are technical and some are not; however, they must match, and there must be consistency in the architecture so that the technical views cover the business views (e.g. demonstrating a business process with a UML activity diagram and/or state diagram).

I believe that each software project is like a spectrum of which each stakeholder sees only a limited part. The role of an architect is to see the entire spectrum. A good approach to do so (one that I use a lot) is to include a business vision (this might not be the best term) in your SAD. It can be a bulleted list, a diagram or both, showing what the application looks like from a business perspective. Label each part of the business vision with a letter or a number, then add an architectural overview and map it to the items of the business vision, indicating which part of the architecture addresses which part of the business vision.

In a nutshell, architecture is the early design decisions; it is not the design itself.

What to put in an SAD?

There are a whole bunch of SAD templates on the internet, such as the template offered by RUP. However, the following items seem necessary for any architecture document:

  • Introduction. This can include Purpose, Glossary, Background of the project, Assumptions, References etc. I personally suggest you explain what kind of methodology you are following. This will avoid lots of debates, I promise!

It is very important to clarify the scope of the document. Without a clear scope, not only will you never know when you are finished, you will also be unable to convince the stakeholders that the architecture is comprehensive enough and addresses all their needs.

  • Architectural goals and constraints: This can include the goals, as well as your business and architectural visions. Also explain the constraints (e.g. if the business has decided to develop the software system with Microsoft .NET, that is a constraint). I would suggest mentioning the components (or modules) of the system when you describe your architectural vision, for example that it will include Identity Management, Reporting and so on, and explaining your strategy for addressing them. As this section is intended to help business people understand your architecture, try to include clear and well-organised diagrams.

A very important item to mention is the architectural principles you are following. This is even more important when the client organization maintains its own set of architectural principles.

    • Quality of service requirements: These address the quality attributes of the system, such as performance, scalability and security. They must not be expressed in a technical language and must not contain implementation details (e.g. the use of Microsoft Enterprise Library 5).
    • Use Case view: Views basically come from the 4+1 model, so if you follow a different model you might not have this one. However, it is very important to identify the key scenarios (or use cases) and describe them at a high level. Again, diagrams, such as a use case diagram, help.
    • Logical view: The logical view demonstrates the logical decomposition of the system, such as the packages that make it up. It helps the business people and the designers understand the system better.
    • Process view: Use activity diagrams, as well as state diagrams if necessary, to explain the key processes of the system (e.g. the process of approving a leave request).
    • Deployment view: The deployment view demonstrates how the system will work in a real production environment. I suggest including two types of diagrams: a normal, human-readable diagram, such as a Visio diagram showing the network, firewall, application server, database, etc., and a UML deployment diagram showing the nodes and dependencies. This again helps the business and technical people share the same understanding of the physical structure of the system.
    • Implementation view: This is the most interesting section for the techies. I like to include the implementation options (e.g. Java and .NET) and provide a list of pros and cons for each. Bear in mind that technical pros and cons don't mean much to business people; they are mostly interested in cost of ownership, availability of resources and so on. If you suggest a technology, or if one has already been selected, list the products and services needed in the production environment (e.g. IIS 7, SQL Server 2008). It is also good to include a very high-level diagram of the system.

I also like to explain the architectural patterns I'm going to use. If you include this in the Implementation view, explain each pattern well enough that a business person can roughly understand what it is for. For instance, if you are using the Lazy Loading pattern, explain what problem it solves and why you are using it.

Needless to say, you also have to decide which architecture style you are suggesting, such as 3-tier or N-tier, client-server etc. Once you have declared that, explain the components of the system (layers, tiers and their relationships) with diagrams.

This part must also include your implementation strategy for addressing the quality of service requirements, such as how you will scale out.

  • Data view: If the application is data-centric, explain the overall approach to data management (never put a database design in this part), along with your backup and restore strategy and your disaster recovery strategy.

Be iterative

It is suggested that the architecture (and as a result the Software Architecture Document) be developed over two or more iterations. It is impossible to build a comprehensive architecture document in one pass: not only does the architecture have an impact on the requirements, but it also begins at an early stage, when many of the scenarios are still likely to change.

How to prove that?

Now that, after a lot of effort, you have prepared your SAD, how will you prove it to the stakeholders? I assume many business people have no idea about the content and structure of an SAD, or about the amount of information you must include in it.

A good approach is to prepare a presentation about the mission of the system, its scope, goals and visions, and your approach. Invite the stakeholders to a meeting, present the architecture to them and explain how it covers their business needs. If they are not satisfied, your architecture is very likely incomplete.


Pluggable modules for ASP.NET


When you design a modular ASP.NET application, sooner or later you will need to think about adding extensibility features to your project, so that it will be possible to add new modules at runtime. There are a few architectures and designs that let you develop an extensible application, like ASP.NET MVP. However, many of them add a lot of complexity to almost everything, and you must learn many concepts to use them. Therefore, it's a good idea to consider other simple but innovative methods, like the one I explain below.

The method I am going to mention lets you develop an ASP.NET application and add some more modules to it later at runtime. In a nutshell, it has the following benefits:

  1. Allows adding new pages to an existing web application at runtime, with no recompilation needed.
  2. Allows adding new web parts to an existing content management system (portal) at runtime.
  3. Several developers can develop different parts of an application.
  4. It is very easy to understand, develop and use.
  5. Does not rely on any 3rd-party library, so nothing is needed except Visual Studio.

And it has the following drawbacks:

  1. One has to know the exact structure of the existing ASP.NET application, such as its folder hierarchy.
  2. It may not cover all possible scenarios (actually, I have not thought about many scenarios).

How to implement it?

The design I am going to explain is possible only if you develop an ASP.NET Web Application Project rather than an ASP.NET Web Site. As far as I remember, Visual Studio 2005 does not let you create a web application project out of the box, so you may need Visual Studio 2008. There are two main parts that we need to develop:

  • A web application project that includes the main modules and pages, loads the plugged modules, checks licensing, performs security tasks and so on.
  • Plugged modules, which add more pages, web parts and functionality.

The main application and the modules must match: they must have the same structure (i.e. folders), use the same master pages and follow the same rules.

The main reason I used a Web Application Project, rather than a Web Site, is that it suits a plug-in based web site better. Building a web application project produces one assembly plus the .aspx, .ascx, .ashx and similar files. After the web application is published, it is still possible to add more pages and files to it. Therefore, if at a later time we add several .aspx pages along with their .dll files, the web application will be able to serve those pages with no problem.

When developing the main application, you should consider a well-formed directory structure, language-specific contents, master pages and so on. For example, your application should have a master page with a general name, like Site.Master. It also needs to keep each module's pages in a separate folder so that new modules can follow the same rule and avoid naming conflicts.
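For instance, a layout along these lines (all names here are illustrative, not prescribed by the method) keeps each module's pages isolated:

```
MainApp/
    Web.config
    PluggedModules.xml
    Site.Master            main application's master page
    Default.aspx
    App_Themes/
    bin/                   main assembly + each plugged module's .dll
    SampleModule/          dropped in later by a plugged module
        web.config         module-level authorization rules
        Default.aspx
```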

To develop the main application, follow the steps below:

  1. Create an empty solution in VS 2008.
  2. Add a new ASP.NET Web Project (not a web site) to the solution.
  3. Add any required folders like App_Themes and implement any required authentication, authorization and personalization mechanisms. Your web application must be complete and working.
  4. Add a master page to the web application project and name it Site.Master or another general name.
  5. Add a new Class Library project and call it Framework (e.g. MyCompany.MyProject.Framework), Common, or whatever name indicates that this class library is shared between the main application and the dynamic modules.
  6. Add a new interface to that class library and call it IModuleInfo. This interface will be implemented by a class inside each pluggable module and will return the root menu items that must be added to the main application's menu (or items to be added to a site navigation). It could also return a list of WebParts that exist inside the module.

public interface IModuleInfo
{
    List<MenuItem> GetRootMenuItems(string[] UserRoles);
}

The UserRoles parameter is not mandatory; it shows that you can pass parameters to the method that returns a module's main menu items. In this example, it indicates which roles the current user has so that the menu items can be filtered properly.

  1. Add a new ASP.NET Web Application project to the solution and name it SampleModule.
  2. Add a folder called SampleModule and, if necessary, more sub-folders.
  3. Add a web.config file to the SampleModule folder and define which users/roles can access which folder.
  4. Add a master page named Site.Master. It must have the same name as the master page in the main application.
  5. Add a public class with any name (I call it ModulePresenter) that implements IModuleInfo (the interface we added to the Common/Framework library).

The ModulePresenter class returns a list of menu items to the main application, which then adds them as root items to its main menu. I will not give detailed code for the part where a module creates these items; it depends on your project.

public class ModulePresenter : IModuleInfo
{
    #region IModuleInfo Members

    public List<System.Web.UI.WebControls.MenuItem> GetRootMenuItems(string[] UserRoles)
    {
        List<MenuItem> items = new List<MenuItem>();
        // build the module's menu items here
        return items;
    }

    #endregion
}
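As a rough illustration only (the page URLs and the "Admin" role name below are made up for the example), the body of GetRootMenuItems might build its items like this:

```csharp
public List<MenuItem> GetRootMenuItems(string[] UserRoles)
{
    List<MenuItem> items = new List<MenuItem>();

    // Root item for this module; the URL is relative to the main
    // application's root, so the page must live under SampleModule/.
    MenuItem root = new MenuItem("Sample Module", "", "", "~/SampleModule/Default.aspx");

    // Filter child items by role (the "Admin" role name is hypothetical).
    if (UserRoles != null && Array.IndexOf(UserRoles, "Admin") >= 0)
    {
        root.ChildItems.Add(new MenuItem("Settings", "", "", "~/SampleModule/Settings.aspx"));
    }

    items.Add(root);
    return items;
}
```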

  1. Compile this project and go back to the main application.
  2. Add an XML file and call it PluggedModules.xml. This file keeps the assembly-qualified type name of each module that must be loaded. An assembly-qualified type name includes the namespace, the class name and the assembly name.

<?xml version="1.0" encoding="utf-8" ?>
<modules>
  <module name="SampleModule" type="SampleModule.ModulePresenter, SampleModule"></module>
</modules>

  1. Write code to query PluggedModules.xml, get the menu items and attach them to the main menu:

public static void LoadModules(Menu menuControl, string[] userRoles, string xmlName)
{
    XDocument document = XDocument.Load(
        HttpContext.Current.Server.MapPath(string.Format("~/{0}", xmlName)));

    var allModules = document.Elements("modules");
    foreach (XElement module in allModules.Elements())
    {
        string type = module.Attribute("type").Value;
        IModuleInfo moduleInfo = Activator.CreateInstance(Type.GetType(type)) as IModuleInfo;
        List<MenuItem> allItems = moduleInfo.GetRootMenuItems(userRoles);
        foreach (MenuItem item in allItems)
        {
            menuControl.Items.Add(item);
        }
    }
}

As seen in the code above, we query the PluggedModules.xml file, read each registered type and create an instance of it using the Activator.CreateInstance method. We then cast it to IModuleInfo, call GetRootMenuItems to get the module's menu items and add them to the main menu.
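One thing to keep in mind: Type.GetType returns null if the type attribute in the XML is wrong (a missing assembly or a typo in the namespace), and the `as` cast returns null for a class that does not implement the interface, so the loop would fail with a NullReferenceException. A slightly more defensive sketch of the loop body:

```csharp
string typeName = module.Attribute("type").Value;

Type moduleType = Type.GetType(typeName);
if (moduleType == null)
    throw new InvalidOperationException(
        "Cannot load type '" + typeName + "'; check the entry in PluggedModules.xml.");

IModuleInfo moduleInfo = Activator.CreateInstance(moduleType) as IModuleInfo;
if (moduleInfo == null)
    throw new InvalidOperationException(
        "Type '" + typeName + "' does not implement IModuleInfo.");
```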

After doing all the above steps, copy the module's .dll file (generated when you build the project) to the main application's \bin folder, and copy its main folder (SampleModule) into the main application's root folder. It will work fine as long as all the naming matches (for example, both use master pages with the same name) and the target URLs of the menu items point to a relative path, i.e. SampleModule/MyPage.aspx.

Please download the sample code from here.

A 3-Tier Architecture with LINQ TO SQL


Recently, I posted a 5-part article about developing a 3-tier architecture using ADO.NET. In this post, I am going to show how to develop such an architecture using LINQ to SQL. Since the last article was quite long, I am going to keep this one short. The architecture is unchanged and still includes four layers: Common, Data Access, Business and Presentation. Moreover, since the business and presentation layers do not change much, I am going to focus on the development of the Common and DAL layers.

LINQ to SQL seems by nature to be designed for 2-tier programs, especially when we use Visual Studio to visually create the entities and DataContexts. However, we can separate the definition of the entities from the data access layer by following the method described below.

First of all, inside Visual Studio 2008, create a new blank solution and add a Class Library project to it named Linq3TierCommon. Right-click on Linq3TierCommon, choose "Add New Item" and add a new "LINQ to SQL Classes" item to your project. Then design your entities visually, or drag and drop any table, view or stored procedure you like onto the .dbml surface (if the surface is not shown, right-click the .dbml file in Solution Explorer and choose "View Designer"). The following image is an example:

Actually, you do not have to use a .dbml file (the LINQ to SQL Classes item); you can define your entities by writing code or by using the SqlMetal tool. Needless to say, though, using Visual Studio makes everything much easier.

Anyway, up to now we have created the entities and their relationships, but the problem is that the automatically generated DataContext would let you access the database through the common layer. To avoid this, set the "Access" property of your DataContext class to "Internal":
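Setting Access to Internal makes the designer generate the context with the internal modifier, along these lines (the context and entity names here are whatever you chose in the designer):

```csharp
// The entity classes stay public, but the generated context is now
// visible only inside the common assembly.
internal partial class MyDataContext : System.Data.Linq.DataContext
{
    public System.Data.Linq.Table<USER> USERs
    {
        get { return this.GetTable<USER>(); }
    }
    // ...
}
```

Note that in this design the DAL does not need the generated context at all; it creates a plain DataContext of its own. If you ever did need the generated context from another assembly, an [assembly: InternalsVisibleTo("Linq3TierDAL")] attribute in the common project would be one way to allow it.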

After completing the common layer, add a new class library to the solution and name it Linq3TierDAL. To keep this post short, I have put the base DAL class and the concrete DAL classes in one class library, but for a real application I strongly recommend putting them in different assemblies. Now add a class to this library and name it Linq3TierDALBase.cs.

This class is the base DAL class and contains common DAL functionality. First, we define a protected property of type DataContext for CRUD actions:

private DataContext _innerDataContext = null;
protected DataContext innerDataContext
{
    get
    {
        if (_innerDataContext == null)
        {
            string connectionString = ConfigurationManager.ConnectionStrings["main"].ToString();
            _innerDataContext = new DataContext(connectionString);
        }
        return _innerDataContext;
    }
}

Then we add a generic method named GetTables that returns a Table<T> collection. Concrete data access classes will use this collection to run LINQ queries.

protected Table<T> GetTables<T>() where T : class
{
    return innerDataContext.GetTable<T>();
}

You may also add a method that calls the ExecuteCommand method of the DataContext class in order to execute stored procedures.
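For example, a thin wrapper (the method and procedure names here are mine, not part of the original sample) might look like:

```csharp
protected int ExecuteNonQuery(string command, params object[] parameters)
{
    // DataContext.ExecuteCommand substitutes {0}, {1}, ... placeholders
    // with the supplied parameters, e.g.:
    //   ExecuteNonQuery("EXEC dbo.PurgeInactiveUsers {0}", cutoffDate);
    return innerDataContext.ExecuteCommand(command, parameters);
}
```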

We also add a method named Save to submit all entity changes:

public virtual void Save()
{
    innerDataContext.SubmitChanges();
}

Now we add a new class named UserDAL.cs, which inherits from Linq3TierDALBase. This class may have several methods, like FetchAll, FetchByPK, Save and so on:

Notice that these methods are not fixed; you may define as many methods as you like.
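A minimal UserDAL might be sketched as follows; the USER entity and its SERIAL column follow the sample database of this post, so adjust the names to your own schema:

```csharp
public class UserDAL : Linq3TierDALBase
{
    // GetTables<T>() comes from the base class and returns the
    // Table<USER> of this instance's DataContext.
    public IQueryable<USER> FetchAll()
    {
        return GetTables<USER>();
    }

    public USER FetchByPK(int serial)
    {
        return GetTables<USER>().SingleOrDefault(u => u.SERIAL == serial);
    }

    // Save() is inherited from Linq3TierDALBase and submits all changes
    // tracked by the same DataContext the entities came from.
}
```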

Afterwards, add a new class library project to the solution and call it Linq3TierBusiness, containing a class named UserBiz.cs. This class may have several business methods, like FetchAll, Save and so on, and is responsible for validation, concurrency control and the like. The following code is a very simple example of what a business class can look like:

public class USERBiz
{
    private USERDAL innerDAL = new USERDAL();

    public IQueryable<USER> FetchAll()
    {
        return innerDAL.FetchAll();
    }

    public void Save()
    {
        // perform validations...
        innerDAL.Save();
    }
}

PLEASE NOTICE that you must call the Save method of the same DAL object from which you retrieved your entities. Otherwise, LINQ to SQL assumes you have made no changes and will save nothing. A nice thing about LINQ is that you may even use LINQ to Objects inside your business classes without involving the data access layer.

For the presentation layer, I have written a small console application that retrieves all users, finds the user whose login name is "admin" and changes their password:

class Program
{
    static void Main(string[] args)
    {
        USERBiz biz = new USERBiz();
        IQueryable<USER> Users = biz.FetchAll();

        USER admin = Users.Single(x => x.LOGINNAME == "admin");

        if (admin.PASSWORD != "newpass")
        {
            admin.PASSWORD = "newpass";
            biz.Save();
        }

        foreach (USER item in Users)
            Console.WriteLine("Login Name: {0} , Password={1}", item.LOGINNAME, item.PASSWORD);

        Console.ReadKey();
    }
}

That’s it. Please notice that this architecture still needs a lot of improvement; what is mentioned here is just to give you the idea.

You can download the full source code from here. It contains a SQL Server 2005 database backup, used to write the sample program of this post; you may use it or your own database.

P.S. The password of the attached sample file is aspguy.wordpress.com

Implementing a 3-Tier architecture with C# – Part 5


Hi back,

In this post we will finalize the implementation of our 3-tier ASP.NET architecture by developing a web-based presentation layer.

So far, we have developed the data access and business layers. The UI layer consists of methods and approaches for handling business and CRUD operations through a web interface. There are several ways to present business objects and reflect the changes back to the database, such as:

  1. Binding controls (e.g. GridView) directly to DataTables coming from the business classes (e.g. from a FetchAll method).
  2. Binding controls (e.g. GridView) to business classes using a data source control, like ObjectDataSource.
  3. Mixing the above methods.

If you choose the first option, you need to handle sorting and paging manually, which means writing code for the paging and sorting events yourself. Handling every detail this way can make your development process long and unproductive.
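To give an idea of the manual work involved, a hand-written page-change handler would look something like this (FetchAll is assumed here to have a parameterless overload that returns the whole table):

```csharp
protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
    GridView1.PageIndex = e.NewPageIndex;

    // Re-query and re-bind on every page change; sorting needs
    // a similar handler for the Sorting event.
    GridView1.DataSource = new PersonBiz().FetchAll();
    GridView1.DataBind();
}
```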

A better approach is to use an ObjectDataSource. To use it, we first have to prepare our business classes: ObjectDataSource works with classes that the DataObject attribute has been applied to. So open your business class (e.g. PersonBiz.cs) and apply the DataObject attribute to it:

[DataObject]
public class PersonBiz
{ ...

Then we can specify four methods for the Select/Insert/Update/Delete operations. Personally, I prefer to use only a Select method and do the other operations manually. My reason is that if I declare Insert/Update methods, I have to use the editing features of the GridView (or similar controls) and pass a lot of parameters to those methods, and I do not like it! 🙂

Anyway, to mark a method as a Select method, we apply the DataObjectMethod attribute to it. This attribute takes a parameter indicating what kind of method it is. For example, a Select method can be declared like this:

[DataObjectMethod(DataObjectMethodType.Select)]
public DataTable FetchAll(...

If you intend to use ObjectDataSource, your Select method must return a type that implements IEnumerable; preferably, it should return a DataTable or DataView. Thus, the FetchAll method returns a DataTable. It also has two parameters, as follows:

[DataObjectMethod(DataObjectMethodType.Select)]
public DataTable FetchAll(int startRowIndex, int maximumRows)
{
    PersonEntity entity = new PersonEntity();
    innerDAL.Fill(entity, startRowIndex, maximumRows);
    return entity;
}

As you can see, startRowIndex and maximumRows are used for paging. If you do not wish to manage paging, you may omit these two parameters.

To bind your UI control (e.g. a GridView) to this method, put an ObjectDataSource on your web form. Then choose "Configure Data Source" from its smart tag. The Configure Data Source window will open:

Check "Show only data components" and open the drop-down list beside it. If nothing is listed, build BusinessLayer.dll and put it in your website's Bin folder.

Select your business class and click Next. At the next step, you have to specify which methods are used for the Select, Insert, Update and Delete operations. The following image is an example:

After choosing all your methods, click Finish to close the dialog. If you are going to let the ObjectDataSource handle paging, go to your page's source, find the ObjectDataSource markup and remove the SelectParameters. This is necessary because startRowIndex and maximumRows will be provided by the ObjectDataSource control; if you do not remove them, you will get an error.

To activate paging in the ObjectDataSource, set its EnablePaging property to true; you also have to set the AllowPaging property of your GridView to true. For the ObjectDataSource to page correctly, we have to tell it how many records are in the database. Therefore, there must be a public method in our business class that returns the total number of rows, and we assign its name to the SelectCountMethod property of the ObjectDataSource. You do not need to flag this method with an attribute:

public int GetTotalRecordCount()
{
    return innerDAL.GetTotalRecordCount();
}

Notice that if you are filtering the results of your SelectMethod, for example by passing a filter string to it, you have to apply the same filter in your SelectCountMethod.
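As a sketch (the nameFilter parameter and the matching innerDAL overloads are illustrative, not part of the original sample), the two methods have to stay in step like this:

```csharp
[DataObjectMethod(DataObjectMethodType.Select)]
public DataTable FetchAll(string nameFilter, int startRowIndex, int maximumRows)
{
    PersonEntity entity = new PersonEntity();
    innerDAL.Fill(entity, nameFilter, startRowIndex, maximumRows);
    return entity;
}

// The count method must apply the same filter as FetchAll,
// otherwise the pager computes the wrong number of pages.
public int GetTotalRecordCount(string nameFilter)
{
    return innerDAL.GetTotalRecordCount(nameFilter);
}
```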

Now build and run your website. You will see that the data is shown and the grid supports sorting and paging. To ensure that paging works correctly, run Microsoft SQL Server Profiler and check that the SELECT statement receives the correct startRowIndex and maximumRows values.

To insert a new record, create a new entity (e.g. PersonEntity), fill it and save it to the database.

To update a record, retrieve the record from the database using its primary key, edit it and save it back.

To delete a record, perform a delete operation on the database using the record's primary key.
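Put together, and assuming the business class exposes FetchByPK, Update and DeleteByPK methods (the names are illustrative), an update and a delete would look roughly like:

```csharp
PersonBiz biz = new PersonBiz();

// Update: retrieve by primary key, change, save back.
DataTable person = biz.FetchByPK(serial);
person.Rows[0]["LastName"] = "NewName";
biz.Update(person);

// Delete: remove the row directly by its primary key.
biz.DeleteByPK(serial);
```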

But how do we obtain the record's primary key? You can store it somewhere in the UI while the control is being bound. To do this, I usually write code like the following in the RowDataBound event:

if (e.Row.RowType == DataControlRowType.DataRow)
{
    int Serial = (int)DataBinder.Eval(e.Row.DataItem, "Serial");
    e.Row.Cells[0].Attributes.Add("Serial", Serial.ToString());
}

In the above code, if the row being bound is a data row (not a footer or header), we get the value of the "Serial" column and store it as an attribute of the first cell of the current row. Notice that whatever you add to the Attributes collection ends up in the rendered HTML page, so never store passwords or other sensitive data this way.

Now, for each row, you have the primary key value, so we can perform update and delete operations easily. Not keeping the whole data set in memory (or in ViewState) increases the number of database accesses but reduces the usage of server resources. It is up to you and your project's conditions to decide which method should be used.

You may download a full example of this implementation HERE. Please note that the file is password protected and its password is: aspguy.wordpress.com

Bye for now..


Implementing a 3-Tier architecture with C# – Part 4



Up to now, we have finished implementing the data access layer. However, we can reduce the dependency between the derived DAL classes and the Web.config file. As you remember, we specified the DB provider name in Web.config, so the DAL classes need access to that file to read the provider name. Another way to specify the provider type is to use custom attributes.

Firstly, in the DataAccessLayerBase namespace, we declare a public enum type named ProviderType:

public enum ProviderType
{
    SqlServer,
    Odbc,
    OleDb
}

We also remove the ProviderName property from the DALHelper class and instead add a new internal method, as below, to get a string representation of each ProviderType member:

internal static string GetProviderTypeName(ProviderType providerType)
{
    switch (providerType)
    {
        case ProviderType.SqlServer: return "System.Data.SqlClient";
        case ProviderType.Odbc: return "System.Data.Odbc";
        case ProviderType.OleDb: return "System.Data.OleDb";
        default: return "System.Data.SqlClient";
    }
}

The custom attribute we need for specifying the provider type must have a public property of type ProviderType. We also add a constructor with a positional parameter of type ProviderType to our custom attribute class:

[AttributeUsage(AttributeTargets.Class)]
public sealed class DbProviderTypeAttribute : Attribute
{
    public DbProviderTypeAttribute() { }

    public DbProviderTypeAttribute(ProviderType prType)
    {
        PrType = prType;
    }

    public ProviderType PrType
    {
        set;
        get;
    }
}

The AttributeUsage attribute indicates that this custom attribute can be applied to classes only.

To make this attribute effective, we need to alter the base DAL class's constructor as below.

First, we check whether the DbProviderTypeAttribute attribute has been applied to the derived DAL class:

if (!this.GetType().IsDefined(typeof(DbProviderTypeAttribute), false))
    throw new System.Exception("DbProviderTypeAttribute must be applied to DAL class");

The above code checks for the existence of DbProviderTypeAttribute and throws an exception if it does not exist.

Then the following piece of code extracts the instance of DbProviderTypeAttribute from the class's metadata and uses the value of its PrType property to set the ProviderName:

List<object> t = this.GetType().GetCustomAttributes(typeof(DbProviderTypeAttribute), false).ToList();
DbProviderTypeAttribute Provider = t[0] as DbProviderTypeAttribute;
ProviderName = DALHelper.GetProviderTypeName(Provider.PrType);
_ProviderFactory = DbProviderFactories.GetFactory(ProviderName);

That’s it. This way, we have reduced the coupling between our UI (the web.config file) and the DAL classes. When developing a derived DAL class, we apply the DbProviderType attribute to it:

[DbProviderType(PrType=ProviderType.SqlServer)]
public class PersonDAL : DALBase
{ …

Implementing the business classes

The implementation of the business classes is fairly easy. A business class is a bridge between the presentation layer and the data access layer; it performs business checks, controls concurrency and may even control the business transactions.

The business classes can be either static or instance classes. Each business method can create one or more instances of the required DAL classes. However, since each DAL class has its own connection object (the innerConnection property), system transactions are better controlled in business methods. This also addresses the issue that arises when a layered architecture needs to control both business and system transactions.

The easiest way to control transactions is to use the System.Transactions namespace and the TransactionScope class. A full illustration of TransactionScope deserves a post of its own, but for now let's say that the transaction begins when an instance of TransactionScope is created and is committed when its Complete() method is called:

using (TransactionScope scope = new TransactionScope())
{
    // ...
    scope.Complete();
}

If an exception is thrown, the transaction will roll back:

using (TransactionScope scope = new TransactionScope())
{
    PersonDAL PDAL = new PersonDAL();
    // ...
    ChildDAL CDAL = new ChildDAL();

    PDAL.Update(personDT);
    CDAL.Update(childDT);

    scope.Complete();
}

Note that MSDTC service must be running on the server machine.

So far, the definition of business classes requires no additional settings. However, when developing the presentation layer, we will come back and make some minor changes to make our business classes compatible with the UI components.

A sample business class might look like this:

public class PersonBiz
{
    private PersonDAL innerDAL = new PersonDAL();

    public DataTable FetchAll(int startRowIndex, int maximumRows)
    {
        PersonEntity entity = new PersonEntity();
        innerDAL.Fill(entity, startRowIndex, maximumRows);
        return entity;
    }

    public void Update(System.Data.DataTable table)
    {
        // validate fields/perform business checks and throw an exception if a criterion is not met
        innerDAL.Update(table);
    }

    // ...
}