Building high-performance ASP.NET applications


If you are building a public-facing web site, one of the things you want to achieve by the end of the project is good performance under load. That means you have to make sure your product works under heavy load (e.g. 50 concurrent users, or 200 users per second) even if at the moment you don’t expect that much traffic. Chances are the web site will attract more and more users over time, and if it is not load tolerant it will start to fall over, leaving you with an unhappy customer and a ruined reputation.

There are many articles on the Internet about improving the performance of ASP.NET web sites, and they all make sense; however, I think there are a few more things you can do to save yourself from facing massive dramas. So what steps can be taken to produce a high-performance ASP.NET or ASP.NET MVC application?

  • Load test your application from early stages

The majority of developers tend to leave load testing (if they ever do it) until the application is developed and has passed integration and regression tests. Even though performing a load test at the end of the development process is better than not doing it at all, it might be far too late to fix performance issues once the code has already been written. A very common example is that when the application does not respond properly under load, scaling out (adding more servers) is considered. Sometimes this is simply not possible because the code is not written to support it, for instance when the objects stored in Session are not serializable, which makes adding more web nodes or worker processes impossible. If you find out at an early stage of development that your application may need to be deployed on more than one server, you can run your tests in an environment that is close to the final environment in terms of configuration, number of servers and so on, and your code will be much easier to adapt.

  • Use the high-performance libraries

Recently I was diagnosing the performance issues of a web site and came across a hot spot in the code where JSON messages coming from a third-party web service had to be de-serialized several times. Those JSON messages were de-serialized with Newtonsoft.Json, and it turned out that Newtonsoft.Json was not the fastest library when it came to de-serialization. We replaced Json.Net with a faster library (e.g. ServiceStack’s serializer) and got a much better result.

Again, if the load test had been done at an early stage, when we picked Json.Net as our serialization library, we would have found that performance issue a lot sooner and would not have had to make so many changes in the code and re-test it all over again.

  • Is your application CPU-intensive or IO-intensive?

Before you start implementing your web site, while the project is being designed, one thing to think about is whether your site is CPU-intensive or IO-intensive. This is important for choosing your strategy for scaling the product.

For example, if your application is CPU-intensive you may want to use a synchronous pattern with parallel processing, whereas for a product with many IO-bound operations, such as communicating with external web services or network resources (e.g. a database), the Task-based Asynchronous Pattern might be more helpful for scaling out your product. In addition, you may want to have a centralized caching system in place, which will let you create Web Gardens and Web Farms in the future, spreading the load across multiple worker processes or servers.

  • Use Task-based Asynchronous Model, but with care!

If your product relies on many IO-bound operations, or includes long-running operations that make expensive IIS threads wait for an operation to complete, you should consider using the Task-based Asynchronous Pattern for your ASP.NET MVC project.

There are many tutorials on the Internet about asynchronous ASP.NET MVC actions (like this one), so in this blog post I will refrain from explaining them. However, I have to point out that traditional synchronous actions in an ASP.NET (MVC) site keep IIS threads busy until the operation is done and the request is processed. This means that if the site is waiting for an external resource (e.g. a web service) to respond, the thread stays busy. The number of threads in .NET’s thread pool that can be used to process requests is limited, so it’s important to release threads as soon as possible. A task-based asynchronous action or method releases the thread while the request is being processed, then grabs a new thread from the thread pool to return the result of the action. This way, many requests can be processed by a few threads, which leads to better responsiveness for your application.

Although the task-based asynchronous pattern can be very handy for the right applications, it must be used with care. There are a few concerns to keep in mind when you design or implement a project based on the Task-based Asynchronous Pattern (TAP). You can read about many of them here; however, the biggest challenge developers face when using the async and await keywords is knowing that in this context they have to deal with threads slightly differently. For example, you can create a method that returns a Task (e.g. Task&lt;Product&gt;). Normally you could call .Wait() on that task, or simply read task.Result, to force the task to run and fetch the result. In a method or action built on TAP, either of those calls will block the running thread, make your program sluggish, and may even cause deadlocks.
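As a rough illustration (the controller, model and method names are hypothetical, not from any particular project), the following sketch contrasts the blocking and non-blocking ways of consuming such a task:

using System.Threading.Tasks;
using System.Web.Mvc;

public class Product { public int Id { get; set; } }

public class ProductsController : Controller
{
    // Blocking: .Result (or .Wait()) holds the request thread until the task
    // finishes, and inside ASP.NET's synchronization context it can deadlock.
    public ActionResult DetailsBlocking(int id)
    {
        Product product = GetProductAsync(id).Result;   // avoid this
        return View(product);
    }

    // Non-blocking: the thread is released while the task is in flight.
    public async Task<ActionResult> Details(int id)
    {
        Product product = await GetProductAsync(id);
        return View(product);
    }

    private Task<Product> GetProductAsync(int id)
    {
        // Placeholder for a real IO-bound call (web service, database, etc.).
        return Task.FromResult(new Product { Id = id });
    }
}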

  • Distribute caching and session state

    It’s very common for developers to build a web application on a single development machine and assume the product will run on a single server too, but that is usually not the case for big public-facing web sites. They often get deployed to more than one server sitting behind a load balancer. Even though you can still deploy a web site that uses in-proc caching on multiple servers by using sticky sessions (where the load balancer directs all requests belonging to the same session to a single server), you may end up keeping multiple copies of session and cached data. For example, if you deploy your product on a web farm of four servers and keep the session data in-proc, when a request comes through, the chance of hitting a server that already contains the cached data is 1 in 4, or 25%, whereas with a centralized caching mechanism in place, the chance of finding a cached item is 100% for every request. This is crucial for web sites that rely heavily on cached data.

    Another advantage of having a centralized caching mechanism (using something like AppFabric or Redis) is the ability to implement a proactive caching system around the actual product. A proactive caching mechanism can pre-load the most popular items into the cache before they are even requested by a client. This can massively improve the performance of a big data-driven application, provided you manage to keep the cache synchronized with the actual data source.
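As a minimal sketch of that idea (assuming the StackExchange.Redis client and a Redis server reachable at localhost; the key names and expiry are illustrative), a centralized cache lookup could look like this:

using System;
using StackExchange.Redis;

public static class ProductCache
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379"); // assumed Redis endpoint

    // Every server in the farm talks to the same cache, so a value cached
    // by one node is visible to all of them.
    public static string GetProductJson(int productId, Func<int, string> loadFromDatabase)
    {
        IDatabase db = Redis.GetDatabase();
        string key = "product:" + productId;

        string cached = db.StringGet(key);
        if (cached != null)
            return cached;

        string json = loadFromDatabase(productId);
        db.StringSet(key, json, TimeSpan.FromMinutes(10)); // expiry is illustrative
        return json;
    }
}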

  • Create Web Gardens

As mentioned before, in an IO-bound web application that involves quite a few long-running operations (e.g. web service calls) you may want to free up your main thread as much as possible. By default, every web site runs under one worker process whose main thread is responsible for keeping the site alive, and unfortunately when it’s too busy, your site becomes unresponsive. One way of adding more “main threads” to your application is to add more worker processes to your site in IIS. Each worker process has its own main thread, so if one is busy there will be another one to process incoming requests.

Having more than one worker process turns your site into a Web Garden, which requires your Session and Application data to be persisted out-of-proc (e.g. on a state server or SQL Server).

  • Use caching and lazy loading in a smart way

    There is no need to emphasize that if you cache a commonly accessed bit of data in memory, you reduce the number of database and web service calls. This specifically helps IO-bound applications which, as I said before, can cause a lot of grief when the site is under load.

    Another approach to improving the responsiveness of your site is lazy loading. Lazy loading means that the application does not hold a certain piece of data yet, but it knows where that data is. For example, if there is a drop-down control on your web page that is meant to display a list of products, you don’t have to load all products from the database when the page loads. You can add a jQuery function to the page that populates the drop-down list the first time it is pulled down. You can apply the same technique in many places in your code, such as when you work with LINQ queries and CLR collections, as sketched below.
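On the server side, the same idea can be expressed with .NET’s Lazy&lt;T&gt;; the product list below is purely illustrative:

using System;
using System.Collections.Generic;

public class ProductCatalog
{
    // The factory delegate runs only on first access to Products.Value,
    // so the database is not touched if the list is never needed.
    private static readonly Lazy<List<string>> Products =
        new Lazy<List<string>>(LoadProductsFromDatabase);

    public IList<string> GetProducts()
    {
        return Products.Value;
    }

    private static List<string> LoadProductsFromDatabase()
    {
        // Placeholder for the real database call.
        return new List<string> { "Keyboard", "Mouse", "Monitor" };
    }
}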

  • Do not put C# code in your MVC views

    Your ASP.NET MVC views are compiled at run time, not at compile time. Therefore, if you include too much C# code in them, that code will not be compiled into your DLLs. Not only does this hurt the testability of your software, it also makes your site slower because every view takes longer to display (it must be compiled first). Another downside of putting code in views is that it cannot run asynchronously, so if you decide to build your site on the Task-based Asynchronous Pattern (TAP), you won’t be able to take advantage of asynchronous methods and actions in the views.

    For example if there is a method like this in your code:

    public async Task<string> GetName(int code)
    {
        var result = …
        return await result;
    }

This method can be run asynchronously in the context of an asynchronous ASP.NET MVC action like this:

    public async Task<ActionResult> Index(CancellationToken ctx)
    {
        var name = await GetName(100);
        return View();
    }

But if you call this method in a view, because the view is not asynchronous you will have to run it in a thread-blocking way like this:

var name = GetName(100).Result;

.Result will block the running thread until GetName() finishes, so the execution of the app will halt for a while, whereas when this code is called using the await keyword the thread is not blocked.

  • Use Fire & Forget when applicable

If two or more operations do not form a single transaction, you probably do not have to run them sequentially. For example, if users can sign up and create an account on your web site, and once they register you save their details in the database and then send them an email, you don’t have to wait for the email to be sent to finalize the operation.

In such a case, the best approach is probably to start a new thread (or queue a background task) that sends the email and return to the main flow immediately. This is called a fire-and-forget mechanism, and it can improve the responsiveness of an application.
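A rough sketch of that idea using Task.Run is shown below; the email sender and database method are hypothetical placeholders:

using System;
using System.Threading.Tasks;

public class RegistrationService
{
    public void Register(string userName, string email)
    {
        SaveAccountToDatabase(userName, email); // part of the core transaction

        // Fire and forget: the welcome email is not awaited, so the caller
        // returns as soon as the account is saved. Exceptions are swallowed
        // here deliberately; log them in a real application.
        Task.Run(() =>
        {
            try { SendWelcomeEmail(email); }
            catch (Exception) { /* log and move on */ }
        });
    }

    private void SaveAccountToDatabase(string userName, string email) { /* ... */ }
    private void SendWelcomeEmail(string email) { /* ... */ }
}

Keep in mind that work queued this way is not guaranteed to survive an application pool recycle, so reserve it for genuinely optional operations.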

  • Build for x64 CPU

32-bit applications are limited to a smaller amount of memory and have access to fewer CPU features and instructions. To overcome these limitations, if your server is a 64-bit one, make sure your site is running in 64-bit mode (by making sure the option for running the application pool in 32-bit mode in IIS is not enabled). Then compile and build your code for the x64 CPU rather than Any CPU.

One example of x64 being helpful: to improve the responsiveness and performance of a data-driven application, a good caching mechanism is a must. In-proc caching is a memory-consuming option because everything is stored within the memory boundaries of the site’s application pool. For an x86 process, the amount of memory that can be allocated is limited to 4 GB, so if a lot of data is added to the cache, this limit is soon reached. If the same site is built explicitly for an x64 CPU, this memory limit is removed, so more items can be added to the cache, which means less communication with the database and better performance.
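If you want to verify at runtime that the site really ended up as a 64-bit process, a quick check such as the following (written to a log or a diagnostics page) can help:

// Logs whether the worker process is 64-bit and how much memory it is using.
string message = string.Format(
    "64-bit process: {0}, 64-bit OS: {1}, working set: {2:N0} bytes",
    Environment.Is64BitProcess,
    Environment.Is64BitOperatingSystem,
    Environment.WorkingSet);
System.Diagnostics.Trace.WriteLine(message);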

  • Use monitoring and diagnostic tools on the server

    There may be many performance issues that you never see with the naked eye because they never appear in error logs. Identifying performance issues is even more daunting when the application is already on production servers, where you have almost no chance of debugging.

    To find slow operations, thread blocks, hangs, errors and so forth, it’s highly recommended to install a monitoring and/or diagnostic tool on the server and have it track and monitor your application constantly. I personally have used New Relic (which is a SaaS offering) to check the health of our online sites. See HERE for more details and for creating your free account.

  • Profile your running application

    Once you finish the development of your site, deploy it to IIS, then attach a profiler (e.g. the Visual Studio Profiler) and take snapshots of various parts of the application. For example, take a snapshot of the purchase operation or the user sign-up operation, then check whether there is any slow or blocking code there. Finding those hot spots at an early stage can save you a great amount of time, reputation and money.


Web API and returning a Razor view


There are scenarios in which an API in a Web API application needs to return formatted HTML rather than a JSON message. For example, we worked on a project where most APIs perform a search and return the result as JSON or XML, while a few of them had to return HTML to be used by an Android app (in a web view container).

One solution would be breaking the controller into two: one inheriting from the MVC Controller class and the other derived from ApiController. However, since those APIs are in the same category in terms of functionality, I would rather keep them in the same controller.

Moreover, using ApiController and returning HttpResponseMessage lets us modify the details of the implementation in the future without having to change the return type (e.g. from ActionResult to HttpResponseMessage), and it will also be easier to upgrade to Web API 2 later.

The advent of IHttpActionResult in Web API 2 allows developers to return custom data. In case you are not using ASP.NET MVC 5 yet, or you are after an easier way, keep reading!

To parse and return a Razor view in a Web API project, simply add some views to your application just like you would for a normal ASP.NET MVC project. Then, through NuGet, find and add RazorEngine, which is a handy library for reading and parsing Razor views.

Inside the API, simply create an object to act as the model, load the content of the view as text, pass the view’s body and the model to RazorEngine, and get back the parsed version of the view. Since the API is meant to return HTML, the content type must be set to text/html.

[Image: the Web API action code]
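Since the original screenshot is not available here, the following is only a rough reconstruction of that idea; it assumes the RazorEngine 3.x Razor.Parse API, and the controller name, view path and model are illustrative:

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web;
using System.Web.Http;
using RazorEngine;   // RazorEngine NuGet package (3.x API assumed)

public class ProductHtmlController : ApiController
{
    [HttpGet]
    public HttpResponseMessage Html(int id)
    {
        // The model handed to the view; a dynamic/anonymous model keeps the view flexible.
        dynamic model = new { Id = id, Name = "Sample product" };

        // Load the Razor view as plain text and let RazorEngine parse it.
        string viewPath = HttpContext.Current.Server.MapPath("~/Views/Products/Details.cshtml");
        string template = File.ReadAllText(viewPath);
        string html = Razor.Parse(template, model);

        // The API returns HTML, so set the content type accordingly.
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(html)
        };
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
        return response;
    }
}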

In this example the view has markup like the one shown below:

[Image: the Razor view markup, bound to a dynamic model]

As you can see, the model is declared as “dynamic”, which lets the view accept a wide range of types. You can move the code from your API into a helper class (or something similar) and create a function that accepts a view name and a model and returns the rendered HTML.

Pluggable modules for ASP.NET


When you design a modular ASP.NET application, sooner or later you will need to think about adding extensibility features to your project so that it is possible to add new modules at runtime. There are a few architectures and designs that let you develop an extensible application, like ASP.NET MVP. However, many of them add a lot of complexity to almost everything, and one has to learn many concepts to use them. Therefore, it’s a good idea to use other simple but effective methods, like the one I will explain below.

The method I am going to mention lets you develop an ASP.NET application and add some more modules to it later at runtime. In a nutshell, it has the following benefits:

  1. Allows adding new pages to an existing web application at runtime and does not need any recompilation.
  2. Allows adding new web parts to an existing content management system (Portal) at run-time.
  3. Several developers can develop different parts of an application.
  4. It is very easy to understand, develop and use.
  5. Does not use any 3rd-party library, so nothing is needed except Visual Studio.

And it has the following drawbacks:

  1. One should know the exact structure of an existing ASP.NET application, like folder hierarchies.
  2. May not cover all possible scenarios (actually, I have not thought about every scenario).

How to implement it?

The design I am going to explain is only possible if you develop an ASP.NET Web Application Project rather than an ASP.NET Web Site. As far as I remember, Visual Studio 2005 does not let you create a web application project; if so, you need Visual Studio 2008. There are two main parts that we need to develop:

  • A web application project that includes the main modules and main pages, loads plugged-in modules, checks licensing, performs security tasks, etc.
  • Plugged-in modules, which add more pages, web parts and functionality.

The main application and the modules must match. That means they must have the same structure (i.e. folders), use the same master pages and follow the same rules.

The main reason I used a Web Application Project, rather than a Web Site, is the benefits a Web Application Project brings for developing a plug-in based web site. After building a web application project, there is one assembly plus several .aspx, .ascx, .ashx, etc. files. After the web application is published, it is still possible to add more pages and files to it. Therefore, if at a later time we add several .aspx pages along with their .dll files, the web application will be able to work with those pages with no problem.

When developing the main application, you should consider a well-formed directory structure, language-specific content, master pages, etc. For example, your application should have a master page with a general name, like Site.Master. It also needs to keep each module’s pages in a separate folder so that new modules can follow the same rule, avoid naming conflicts and so on.

To develop the main application, follow the steps below:

  1. Create an empty solution in VS 2008.
  2. Add a new ASP.NET Web Project (not a web site) to the solution.
  3. Add any required folders like App_Themes and implement any required authentication, authorization and personalization mechanisms. Your web application must be complete and working.
  4. Add a master page to the web application project and name it Site.Master or another general name.
  5. Add a new Class Library Project and call it Framework (e.g. mycompany.myproject.Framework), Common, or whatever name indicates that this class library will be shared between the main application and the dynamic modules.
  6. Add a new interface to the mentioned class library and call it IModuleInfo. This interface will be implemented by a class inside every pluggable module and will return the root menu items that must be added to the main application’s menu (or items to be added to site navigation). It could also return a list of WebParts that describes the web parts existing inside the module.

public interface IModuleInfo
{
    List<MenuItem> GetRootMenuItems(string[] UserRoles);
}

The UserRoles parameter is not mandatory; it just shows that you can pass parameters to the method that returns a module’s main menu items. In this example, it indicates which roles the current user has so that the menu items can be filtered properly.

  7. Add a new ASP.NET Web Application project to the solution and name it SampleModule.
  8. Add a folder called SampleModule and, if necessary, more sub-folders.
  9. Add a web.config file to the SampleModule folder and define which users/roles can access which folder.
  10. Add a master page named Site.Master. It must have the same name as the master page in the main application.
  11. Add a public class with any name (I call it ModulePresenter) that implements IModuleInfo (the interface we added to the Common/Framework library).

The ModulePresenter class returns a list of menu items to the main application, which then adds those menu items as root items to its main menu. I will not include detailed code for the part where a module creates these items; it depends on your project.

public class ModulePresenter : IModuleInfo
{
    #region IModuleInfo Members

    public List<System.Web.UI.WebControls.MenuItem> GetRootMenuItems(string[] UserRoles)
    {
        List<MenuItem> items = new List<MenuItem>();
        //:
        //:
        return items;
    }

    #endregion
}

  12. Compile this application and go back to the main application.
  13. Add an XML file and call it PluggedModules.xml. This file holds the qualified type name of each module that must be loaded. A qualified type name includes the assembly, namespace and class name.

<?xml version="1.0" encoding="utf-8" ?>
<modules>
  <module name="SampleModule" type="SampleModule.ModulePresenter, SampleModule"></module>
</modules>

  14. Write code that queries PluggedModules.xml, gets the menu items and attaches them to the main menu:

public static void LoadModules(Menu menuControl, string[] userRoles, string xmlName)
{
    XDocument document = XDocument.Load(HttpContext.Current.Server.MapPath(string.Format("~/{0}", xmlName)));
    var allModules = document.Elements("modules");

    foreach (XElement module in allModules.Elements())
    {
        string type = module.Attribute("type").Value;
        IModuleInfo moduleInfo = Activator.CreateInstance(Type.GetType(type)) as IModuleInfo;
        List<MenuItem> allItems = moduleInfo.GetRootMenuItems(userRoles);

        foreach (MenuItem item in allItems)
        {
            menuControl.Items.Add(item);
        }
    }
}

As the code above shows, we query the PluggedModules.xml file, extract the registered types and create an instance of each one using the Activator.CreateInstance method. Then we cast it to IModuleInfo, call GetRootMenuItems to get the module’s menu items and add them to the main menu.

After completing the steps above, copy the module’s .dll file (generated when you build the project) to the main application’s \bin folder and add its main folder (SampleModule) to the main application’s root folder. It will work fine as long as all naming matches (for example, both use master pages with the same name) and as long as the target URLs specified in menu items point to a relative path, i.e. SampleModule/MyPage.aspx.
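For completeness, here is a minimal sketch of how the main application might call LoadModules, assuming the helper class is named ModuleLoader, the role manager is enabled, and the master page declares an asp:Menu control called mainMenu (all of these names are illustrative):

// In Site.Master.cs of the main application.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        string[] roles = (Page.User != null && Page.User.Identity.IsAuthenticated)
            ? System.Web.Security.Roles.GetRolesForUser()
            : new string[0];

        // mainMenu is the asp:Menu declared in Site.Master;
        // PluggedModules.xml sits in the application root.
        ModuleLoader.LoadModules(mainMenu, roles, "PluggedModules.xml");
    }
}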

Please download the sample code from here.

Creating a Captcha control – Part:2


In this post I will explain how to generate a hard-to-read image out of our Captcha text. The new Captcha with image will look like this:

Final Captcha image

In the MyCaptcha control, each letter differs from the others in three properties:

1- Font

2- Size

3- Distance from the next letter (character spacing)

Therefore, I wrote a class named Letter; each instance of it holds a character along with all its properties, such as its font name and size. The class has a constructor that accepts a character argument and assigns random properties to it:

public class Letter
{
    string[] ValidFonts = { "Segoe Script", "Century", "Eccentric Std", "Freestyle Script", "Viner Hand ITC" };

    public Letter(char c)
    {
        // Pick a random font and a random size between 20 and 39 pixels.
        Random rnd = new Random();
        font = new Font(ValidFonts[rnd.Next(ValidFonts.Count() - 1)], rnd.Next(20) + 20, GraphicsUnit.Pixel);
        letter = c;
    }

    public Font font { get; private set; }

    // The size of the letter when rendered with its own font.
    public Size LetterSize
    {
        get
        {
            var Bmp = new Bitmap(1, 1);
            var Grph = Graphics.FromImage(Bmp);
            return Grph.MeasureString(letter.ToString(), font).ToSize();
        }
    }

    public char letter { get; private set; }

    // Distance to the next letter; set while the captcha is being rendered.
    public int space { get; set; }
}

As you can see in the source code above, I pick a random font name from the ValidFonts array. The font names in ValidFonts are Windows Vista fonts; keep in mind that you must use font names that exist on your web server. I also recommend using fancy fonts (like gothic ones) to make the produced image harder to read.

I have also added a property of type Size that returns the width and height of the letter when it is rendered with its own font. To get the character size I use the Graphics.MeasureString method.

The ‘space’ property is set while the captcha text is being rendered. To render the captcha image, I use a generic handler (.ashx) file. With a generic handler we can render any output type and send it to the output stream. An .ashx file can be treated much like an .aspx file but has noticeably less overhead. For example, we can pass query strings to it and generate the output based on them.

I will send the captcha text as a query string called CaptchaText to the GetImgText.ashx generic handler. In the .ashx code, I make an instance of the Letter class for each character.

var CaptchaText = context.Request.QueryString["CaptchaText"];
if (CaptchaText != null)
{
    List<Letter> letter = new List<Letter>();
    int TotalWidth = 0;
    int MaxHeight = 0;

    foreach (char c in CaptchaText)
    {
        var ltr = new Letter(c);
        int space = (new Random()).Next(5) + 1;
        ltr.space = space;
        letter.Add(ltr);
        TotalWidth += ltr.LetterSize.Width + space;
        if (MaxHeight < ltr.LetterSize.Height)
            MaxHeight = ltr.LetterSize.Height;
        System.Threading.Thread.Sleep(1);
    }

As the piece of code above shows, all Letter instances are stored in the letter generic list. The width of each letter plus its distance to the next letter is accumulated in the TotalWidth local variable. We also record the height of the tallest letter to make sure all letters fit in the captcha image.

I also have two constants for vertical and horizontal margins:

const int HMargin = 5;
const int VMargin = 3;

Thus, our image will have a size of TotalWidth + HMargin by MaxHeight + VMargin:

Bitmap bmp = new Bitmap(TotalWidth + HMargin, MaxHeight + VMargin);
var Grph = Graphics.FromImage(bmp);

In the next step, I draw each letter with its own font and size and position it according to its space property:

int xPos = HMargin;
foreach (var ltr in letter)
{
    Grph.DrawString(ltr.letter.ToString(), ltr.font, new SolidBrush(Color.Navy), xPos, VMargin);
    xPos += ltr.LetterSize.Width + ltr.space;
}

Now the image is cooked and ready! We should send it to the output stream:

bmp.Save(context.Response.OutputStream, System.Drawing.Imaging.ImageFormat.Jpeg);

Up to now the captcha text is drawn well, but I’d like to smudge it a little more to make it even harder to read. To do this, I will draw a number of circles at random positions on the image. One nice idea is to define an IShape interface with a method named Sketch(), then define different classes that implement IShape and draw different shapes on the image, as sketched below.
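That idea could be sketched roughly as follows; the interface and class names are only suggestions, and the circle shape mirrors the smudging loop that follows:

using System;
using System.Drawing;

// A hypothetical abstraction for the "smudge" shapes drawn over the captcha.
public interface IShape
{
    void Sketch(Graphics graphics, Size imageSize, Random random);
}

public class CircleShape : IShape
{
    public void Sketch(Graphics graphics, Size imageSize, Random random)
    {
        // Draw a small circle at a random position, like the loop below does.
        using (var pen = new Pen(Color.Gray))
        {
            graphics.DrawEllipse(pen,
                random.Next(imageSize.Width - 1),
                random.Next(imageSize.Height - 1), 5, 5);
        }
    }
}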

By the way, the code that smudges the image is this:

Color[] Colors = { Color.Gray, Color.Red, Color.Blue, Color.Olive };
for (int i = 0; i < 200; i++)
{
    var rnd = new Random();
    var grp = Graphics.FromImage(bmp);
    grp.DrawEllipse(new Pen(Colors[rnd.Next(Colors.Length)]), rnd.Next(bmp.Width - 1), rnd.Next(bmp.Height - 1), 5, 5);
    System.Threading.Thread.Sleep(1);
}

You can download the source code of this control from HERE. The password of the archived file is: aspguy.wordpress.com

There is also a project for this control on CodePlex. You can upload any changes you make to the source code there.

Have fun with Captchaing!

Aref Karmi

9 Apr 2009

Creating a Captcha control – Part:1


In this post and the one after it, I will explain how to develop a Captcha control and use it in an ASP.NET web site.

As described in Wikipedia, a CAPTCHA (IPA: /ˈkæptʃə/) is a type of challenge-response test used in computing to ensure that the response is not generated by a computer. In crude terms, a captcha control shows some hard-to-read letters on the screen and asks the user to type the text into a box. It then checks whether the entered text is correct.

There are a lot of Captcha controls for ASP.NET that you can download and use in your project, but none is as interesting as the one that you write yourself and know exactly how it works!

I will explain the techniques of writing a simple but powerful Captcha control in two posts because it is made of two parts:

  1. The captcha control that displays a text and asks users to enter text in a box then validates it.
  2. The class that renders the Captcha text as a hard-to-read image.

Because the second part can be reused for different purposes (for example in your own Captcha control), I will describe it in a separate post.

OK let’s get started. The Captcha control that we are going to write has the following specs:

  1. Is developed as an .ascx control so it is reusable in every ASP.NET website.
  2. Saves nothing on your hard disk, so if you open the same page in two different windows (or tabs) there will be no conflict between the two Captcha controls.
  3. Is very easy to use and needs no complicated concepts like understanding and using HTTP handlers.

The only drawback I can think of with this control is that it stores the generated text in ViewState; thus, you should always encrypt it.

To create this control, I first create a new web site. Then I add an .ascx (web user control) to it called MyCaptcha. It looks like this:

[Image: MyCaptcha.ascx markup]

In the markup above, lbltext displays the captcha text, and TxtCpatcha is the text box in which the user must enter the code. There is also a button named btnTryNewWords that re-generates the code if the user has difficulty reading it.

The control looks like this in design mode:

[Image: the control in design mode]

The control has a property named LetterCount that specifies the number of letters in the code:

public int LetterCount { get; set; }

It also has a private property that holds the generated text in ViewState.

private string GeneratedText
{
    get
    {
        return ViewState[this.ClientID + "text"] != null ? ViewState[this.ClientID + "text"].ToString() : null;
    }
    set
    {
        // Encrypt the value before storing it in ViewState.
        ViewState[this.ClientID + "text"] = value;
    }
}

As commented in the code above, you are highly recommended to encrypt the value before you put it in ViewState.

To generate the captcha text I wrote a public method called TryNew() that picks random letters from a list of characters and combines them:

public void TryNew()
{
    char[] ValidChars = { '1','2','3','4','5','6','7','8','9','0','a','b','c','d','e','f','g','h','i',
                          'j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z' };
    string Captcha = "";
    int letterCount = LetterCount > 5 ? LetterCount : 5;
    for (int i = 0; i < letterCount; i++)
    {
        int index = new Random(DateTime.Now.Millisecond).Next(ValidChars.Count() - 1);
        Captcha += ValidChars[index].ToString().ToUpper();
        Thread.Sleep(1);
    }
    GeneratedText = Captcha;
    lbltext.Text = Captcha;
}

Because the captcha control is not case-sensitive, there are no capital letters in the ValidChars array.

You also may have noticed the Thread.Sleep(1) statement in the above code! Why should we stop the running thread for one millisecond?!

The answer is that the Random class uses a time-based value as its default seed. If the computer is fast and the loop that selects and combines the letters is short (for example only 5 iterations), the whole loop begins and ends within the same millisecond, so every Random instance gets the same seed and you end up with identical letters only. To see what I mean, remove the Thread.Sleep(1) line and you will get text like AAAAAA or 333333.
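An alternative to the Thread.Sleep(1) trick is to share a single Random instance across the whole loop, so every call to Next() continues the same pseudo-random sequence; a minimal sketch (with an abbreviated character array for illustration):

// One Random instance shared by the loop, instead of a new one per iteration.
char[] validChars = { 'a', 'b', 'c', 'd', '1', '2', '3' }; // abbreviated for illustration
Random rnd = new Random();
string captcha = "";
for (int i = 0; i < 5; i++)
{
    captcha += validChars[rnd.Next(validChars.Length)].ToString().ToUpper();
}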

Anyway, there is also another public property called IsValid that indicates whether the entered text is equal to the captcha text:

public bool IsValid
{
    get
    {
        bool result = GeneratedText.ToUpper() == TxtCpatcha.Text.Trim().ToUpper();
        if (!result)
            TryNew();
        return result;
    }
}

That’s it. Now, to test the control, drag MyCaptcha.ascx and drop it onto the Default.aspx page. Your .aspx page will look like this:

<uc1:MyCaptcha ID="MyCaptcha1" runat="server" />
<br />
<asp:Label ID="lblCheckResult" runat="server" Text="?"></asp:Label>
<br />
<asp:Button ID="btnCheck" runat="server" OnClick="btnCheck_Click" Text="Check it!" />

In the btnCheck_Click event handler, write the following piece of code:

if (MyCaptcha1.IsValid)
    lblCheckResult.Text = "It is ok";
else
    lblCheckResult.Text = "oops!, invalid text was entered.";

After running the web site, you will have a captcha control like the image below:

[Image: the rendered captcha control]

In the next post I will show you how to scramble the text in a way that only a human with good eyesight can read it 🙂

I also might add some extra code so that it can work along with other validation controls on the page.

Desktop Sharing!


In some applications there is a need for desktop/application sharing, whiteboarding, etc. Microsoft Office Communications Server 2007 is a great solution in case you need video conferencing, desktop/application sharing, whiteboarding and so on. Note that only OCS 2007 R2 Web Access, which can be installed on an x64 OS, is suitable for web-based desktop sharing.

Since OCS is an expensive server and will make your customer pay an arm and a leg for the required licenses, you may need a simpler and cheaper solution. For instance, in a web-based Learning Management System (LMS) application, there was a need to enable desktop sharing for teachers. Only teachers would share their desktop; there was no mutual interaction between the instructor and the learners. As you may guess, the customer could not afford the cost of OCS licenses.

To answer this need, I used Windows Media Services and Windows Media Encoder. Using these two, the instructor is able to capture his/her computer’s screen and broadcast it over the network so that students can see what he/she is doing.

What you need:

1- Windows 2003 Server

2- Windows Media Encoder

3- Windows Media Player

To install Windows Media Services, on the computer that will be your streaming server, go to Control Panel -> Add or Remove Programs -> Add/Remove Windows Components. Then, at the bottom of the Windows Components Wizard dialog box, check the Windows Media Services checkbox and click OK.

[Image: Windows Components Wizard with Windows Media Services selected]

After installing Windows Media Services, you need to configure it. There are two things that should be configured:

  1. Authentication
  2. Protocol Control

Go to Administrative Tools -> Windows Media Services snap-in. Then click the name of the server computer and navigate to the Properties tab. Find Authentication and open the properties of each authentication method (if available). Specify who can access this service and grant each user or group the proper access rights.

[Image: authentication/authorization settings]

Logically, students should have Read access, but teachers should have Create and Write access along with Read access, because a teacher pushes the media content to the media server.

In the next step you should enable the control protocol and configure it. If your server is configured as a web server and has IIS installed, IIS is likely to occupy port 80. Therefore, you need to specify another open HTTP port through which Windows Media Encoder can push media content to the server. If port 80 is in use, the WMS HTTP control protocol is disabled; right-click it and choose Enable. Afterwards, open the properties window and specify a new port number as shown in the picture below:

[Image: WMS HTTP control protocol properties]

At this stage Windows Media Services is configured and ready to use.

On each instructor’s client computer, install Windows Media Encoder. To begin broadcasting, run it and, from the New Session window, go to the Quick Starts tab and choose Broadcast Company Meeting (then click Next).

[Image: New Session window]

The Windows Media Services Publishing Point window then appears. In this window, enter the server IP or name along with the port number you specified in the WMS HTTP Control Protocol window. You must also specify a name for your publishing point, for example VisualC#1!

[Image: Publishing Point window]

After clicking OK, Windows Media Encoder starts and the Properties window is shown; if not, click the “Properties” button on the toolbar. Here, for Video, choose Screen Capture. You can click the Configure button and tell Windows Media Encoder how to capture your computer screen; the available options are Entire Screen, Region or Window.

If you wish to broadcast your computer’s audio, enable Audio and select your preferred audio device.

Also, for “At End” option, choose “Stop”!

[Image: Windows Media Encoder Properties window]

In the next step, go to the Output tab and uncheck these two options:

  1. Pull from encoder
  2. Archive file

[Image: Output tab]

After doing this, go to the Compression tab and, from the Destination drop-down, choose the Windows Media Server option. Then, from the Video drop-down, select “Screen Capture (CBR)”. You can also change the bit rate; the higher the bit rate, the higher the quality but the slower the transmission.

[Image: Compression tab]

OK! This process might be a little complicated and hard to memorise. No worries: go to the File menu and save your configuration. You can then provide the saved configuration file to your teachers and ask them to simply double-click the file to begin recording (except that they should change the name of the publishing point)!

The publishing point name must be generated by your application and shown to the instructor to be used in Windows Media Encoder, and it should also be used in the client’s viewer for connecting to the media server.

Clients can watch the streamed video with Windows Media Player. The protocol used for watching streamed videos is MMS, analogous to HTTP. For example, we may use mms://videoserver/myvideo

In the above address, videoserver is the server’s name and myvideo is the publishing point name.

To create the client’s viewer, we need to embed the Windows Media Player ActiveX control into our .aspx page. Visit this address to see how this is done:

http://www.mioplanet.com/rsc/embed_mediaplayer.htm

After embedding, the URL parameter must be set properly. The best way is to write a function (with the internal access modifier) that generates the URL and bind it to the URL parameter:

<PARAM NAME="URL" VALUE="<%=GetMMSURL()%>">
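A minimal sketch of such a helper is shown below; the server name and the way the publishing point is obtained are placeholders you would replace with your own logic:

// In the page's code-behind; returns the mms:// address the player should open.
internal string GetMMSURL()
{
    string mediaServer = "videoserver";                             // your Windows Media Services host
    string publishingPoint = GetPublishingPointForCurrentLesson();  // generated by your application
    return string.Format("mms://{0}/{1}", mediaServer, publishingPoint);
}

private string GetPublishingPointForCurrentLesson()
{
    return "VisualCSharp1"; // placeholder
}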

Each time the instructor begins broadcasting, students can open their viewing page and watch their teacher’s desktop! The teacher can display his/her desktop, open an application and demonstrate it (like in a demo session), or open Microsoft Paint and use it as a whiteboard!

The approach above is one-way unless you provide a facility for clients to send text messages to the teacher and ask questions. However, for customers on a tight budget, this should be a relieving way of doing desktop viewing 🙂

Reading and writing images from/to database


Hi,
This post will show you how to save/load images to/from a database. This approach lets you do it without having to save the image to disk. On a real hosting server you probably will not have write permissions, so it is sometimes vital to do image saving/loading on the fly.

In this post we will work on a database named Personnel, in which there is a table called Personnel:

create table Personnel
(
id int identity(1,1) not null primary key,
FullName varchar(100) not null,
Picture image null
)

We are going to store our personnel’s names and pictures in this table. On the application side, create an .aspx page with the following controls:
1- a TextBox
2- a RequiredFieldValidator
3- a FileUpload control
4- a Button
5- a DataList control

Your page will look like this:

As the image above indicates, we will save each person’s full name and picture to the database, and the personnel information will be viewed in the DataList control.

Behind the AddRecord button, we read the image into a byte[] array and then transmit the bytes to the database. I strongly recommend using a stored procedure (if you are working with plain ADO.NET) so that you won’t face any problems with converting bytes to strings.

In ASP.NET there is a class called HttpPostedFile. This class gives us full information about the file that is being uploaded. The code below shows how we read the file:

if (FileUpload1.HasFile)
{
    HttpPostedFile myFile = FileUpload1.PostedFile;
    int Length = myFile.ContentLength;
    string ContentType = myFile.ContentType.ToUpper();

    if (Length == 0)
        throw new Exception("File size must be greater than zero!");
    if (ContentType.CompareTo("IMAGE/PJPEG") != 0 && ContentType.CompareTo("IMAGE/JPEG") != 0)
        throw new Exception("Only JPEG files are welcome!");

    Byte[] myFileBytes = new byte[Length];
    myFile.InputStream.Read(myFileBytes, 0, Length);

In the code above I only allow JPEG files to be uploaded. I just wanted to show how to control the image type; in fact, you can upload any file type.

As the last line of this code shows, HttpPostedFile contains an inner stream from which we can read the image bytes. After reading the image file, we simply save it into the database. I have created a stored procedure for doing this and I call it from my C# code:

ALTER PROCEDURE dbo.SavePersonnel
@FullName Varchar(100),
@Picture Image

AS
Insert into Personnel (FullName,Picture) Values (@FullName,@Picture )
Return

C#:

string cnStr = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString();
using (SqlConnection connection = new SqlConnection(cnStr))
{
    SqlCommand cmd = connection.CreateCommand();
    cmd.CommandText = "dbo.SavePersonnel";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("FullName", txtFullName.Text.Trim());
    cmd.Parameters.AddWithValue("Picture", myFileBytes);
    connection.Open();
    cmd.ExecuteNonQuery();
}

Well, now how do we read the image back and show it using a System.Web.UI.WebControls.Image control? Unfortunately, unlike the System.Drawing.Image class, the ASP.NET Image control does not expose a stream, so we cannot set the image content without an image URL.

Thanks to ASP.NET generic handlers, we can overcome this problem easily. We can develop an .aspx or .ashx (generic handler) file and use it to read the record from the database and write the image bytes into the response output stream. A generic handler (.ashx) file is much more lightweight than an .aspx file, so I will use an .ashx file. We will pass the ID of each person to this handler; it then finds the person in the database, reads the image and writes it to the current HttpContext’s response:

Here is the full source code of the .ashx file:

<%@ WebHandler Language="C#" Class="GetPersonnelImage" %>

using System;
using System.Web;
using System.Data.SqlClient;

public class GetPersonnelImage : IHttpHandler {

    public void ProcessRequest (HttpContext context) {
        context.Response.ContentType = "image/jpeg";
        if (string.IsNullOrEmpty(context.Request.QueryString["id"]))
            return;
        else
        {
            string cnStr = System.Configuration.ConfigurationManager.ConnectionStrings["ConnectionString"].ToString();
            using (SqlConnection connection = new SqlConnection(cnStr))
            {
                SqlCommand cmd = connection.CreateCommand();
                // Use a parameter rather than string concatenation to avoid SQL injection.
                cmd.CommandText = "Select Picture from dbo.Personnel Where Id=@Id";
                cmd.Parameters.AddWithValue("Id", context.Request.QueryString["id"]);
                connection.Open();
                SqlDataReader reader = cmd.ExecuteReader();
                if (reader.HasRows)
                {
                    reader.Read();
                    byte[] image = reader.GetValue(0) as byte[];
                    System.IO.MemoryStream ms = new System.IO.MemoryStream(image);
                    ms.WriteTo(context.Response.OutputStream);
                    ms.Flush();
                    context.Response.OutputStream.Flush();
                }
            }
        }
    }

    public bool IsReusable {
        get {
            return true;
        }
    }
}

As you can see, we read the person’s record with a SqlDataReader and then get the image bytes using its GetValue method.

To let an Image control show the image, we have to set its ImageUrl property this way:

Image img = new Image();
img.ImageUrl = "~/GetPersonnelImage.ashx?id=10";

Inside a GridView or DataList, we can create a template field with an Image control and bind the Image control’s ImageUrl property to the mentioned generic handler. For example:

<asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("id", "GetPersonnelImage.ashx?id={0}") %>' />

You can download a full sample program from here. To run the project, you must have SQL Server 2005 Express Edition on your machine. If you don’t, change the connection string in the Web.config file to point to your own database.