Continuing from my previous post about Semaphore, today I want to share how to use a semaphore to limit the number of requests processed by WCF. If you have read my previous post, I used a lock and a count check to limit the request processing; today you will find similar functionality implemented with a different method.
In my previous approach, I had to set the WCF concurrency mode to single so that only one thread controls all the incoming requests, like this:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Single)]
Hence, the requests pile up in the IIS request queue, which has a default limit of 1000. When a lot of requests come in at the same time, the queue can potentially fill up very quickly.
Now, I have changed the method to use a Semaphore to handle the threads for me, and I can set the concurrency mode to multiple. You may refer to the original method code from HERE. The following is the changed code:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SampleService : ISampleService
{
    private static SemaphoreSlim semaphore;
    private static Stopwatch watch;

    static SampleService()
    {
        //Set maximum tps as 3
        semaphore = new SemaphoreSlim(0, 3);
        watch = new Stopwatch();
        watch.Start();
        Task timerTask = new Task(HandleTransactionLimit());
        timerTask.Start();
    }

    private static Action HandleTransactionLimit()
    {
        return async () =>
        {
            int counter = 1;
            Task sleep = Task.Delay(1000); //create an async sleep task to sleep for 1 second
            while (true)
            {
                try
                {
                    if (counter <= 3)
                    {
                        //release a thread if within the limit
                        semaphore.Release();
                        counter++;
                    }
                    else
                    {
                        //otherwise, do nothing and wait for the sleep task to complete the 1 second sleep
                        counter = 1;
                        await sleep;
                    }
                    //when the 1 second sleep is completed, create a new sleep task again
                    if (sleep.IsCompleted)
                        sleep = Task.Delay(1000);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine(ex);
                }
            }
        };
    }

    public string GetData(string value)
    {
        semaphore.Wait();
        //Do your process here
        return string.Format("You entered: {0}", value);
    }
}
What the above code does: when a user calls the service, the call is first blocked by the semaphore at the semaphore.Wait() method. There is also a separate long-running task, timerTask, which is started from the static constructor; its job is to keep count of the requests and act as a timer. When a request comes in within the 1-second window and within the request limit, the semaphore releases the blocked call. Otherwise, nothing is released and the call waits for the next second.
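As a side note, roughly the same one-second refill idea could also be sketched with a System.Threading.Timer that tops the semaphore back up once every second, instead of a long-running looping task. This is only a sketch under that assumption; the ThrottleSketch class, LimitPerSecond constant, and Refill method below are made-up names for illustration and are not my actual service code:
using System;
using System.Threading;

public static class ThrottleSketch
{
    private const int LimitPerSecond = 3;
    private static readonly SemaphoreSlim Semaphore =
        new SemaphoreSlim(LimitPerSecond, LimitPerSecond);

    //Keep the timer in a field so it is not garbage collected;
    //it refills the semaphore once every second
    private static readonly Timer RefillTimer =
        new Timer(_ => Refill(), null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));

    private static void Refill()
    {
        //Release only the slots that were actually consumed during the last second,
        //so Release() never pushes the count past the maximum
        int used = LimitPerSecond - Semaphore.CurrentCount;
        if (used > 0)
            Semaphore.Release(used);
    }

    public static string GetData(string value)
    {
        Semaphore.Wait(); //blocks when the per-second budget is used up
        return string.Format("You entered: {0}", value);
    }
}
Releasing only the consumed slots also avoids the SemaphoreFullException that Release() throws when the semaphore is already at its maximum count.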
The following is the program I used for testing:
class Program
{
    static void Main(string[] args)
    {
        //Hit the service in parallel (concurrently)
        //to test whether the service processes more than 3 transactions per second
        Parallel.For(0, 100, i =>
        {
            SampleServiceClient proxy = new SampleServiceClient();
            Console.WriteLine(string.Format(
                "{0} Service call {1} : {2}",
                DateTime.Now.ToString("yyyy-MM-dd hh:mm:ss.fff"),
                i,
                proxy.GetData("Test")));
        });
        Console.WriteLine("Press any key to continue...");
        Console.ReadKey();
    }
}
The test result is still similar to the previous post: many requests come in, each request is processed on a different thread, the code execution in every new thread is blocked by the semaphore, and only 3 requests are allowed to be processed every second. So, do you find that by using a semaphore you can achieve the same thing with less code? One concern about the semaphore is that it does not guarantee FIFO (first in, first out) ordering: a request that comes in earlier may not be processed first, unless you implement a queue to handle the ordering yourself, along the lines of the sketch below.
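Just as an illustration and not part of my actual service, a FIFO-fair throttle could queue up the waiting calls and release them in arrival order once every second. The class name FifoThrottle and the method WaitInTurnAsync below are made-up names for this sketch:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class FifoThrottle
{
    private readonly ConcurrentQueue<TaskCompletionSource<bool>> waiters =
        new ConcurrentQueue<TaskCompletionSource<bool>>();
    private readonly int limitPerSecond;
    //Keep the timer in a field so it is not garbage collected
    private readonly Timer timer;

    public FifoThrottle(int limitPerSecond)
    {
        this.limitPerSecond = limitPerSecond;
        //Every second, release up to the limit, strictly in arrival order
        timer = new Timer(_ => ReleaseBatch(), null,
            TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }

    public Task WaitInTurnAsync()
    {
        var tcs = new TaskCompletionSource<bool>();
        waiters.Enqueue(tcs); //callers line up first come, first served
        return tcs.Task;
    }

    private void ReleaseBatch()
    {
        for (int i = 0; i < limitPerSecond; i++)
        {
            TaskCompletionSource<bool> tcs;
            if (waiters.TryDequeue(out tcs))
                tcs.TrySetResult(true); //wake the oldest waiter first
            else
                break;
        }
    }
}
A synchronous service method could then call something like fifoThrottle.WaitInTurnAsync().Wait(); in place of semaphore.Wait();.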
If you are interested in my source code, feel free to download it from HERE.