Showing posts with label .NET.

18/08/2017

Json.net also tricked me




Recently, I was tricked by Roslyn, today by Json.NET. My bloody luck ;) Let's look at the following two very simple classes. Class A has one readonly property, and I had to define a special constructor to allow Json.NET to set this property. B is also simple: it has one property, this time of type A, with some default value.
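A minimal sketch of what such classes could look like (the class member names are only illustrative):
public class A
{
   public string Text { get; }

   // Json.NET matches constructor parameters to JSON property names,
   // so this constructor lets it populate the readonly property during deserialization.
   public A(string text)
   {
      Text = text;
   }
}

public class B
{
   // A property of type A with some default value
   public A Value { get; set; } = new A("default");
}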

14/08/2017

Roslyn tricked me again




A while ago, 2 of my SpecFlow tests started failing on the build server. At the same time, no problems were observed on a few local machines. I also didn't find any exceptions in the log, so I decided to log into the server and debug the problem there.

Quite soon I figured out that the problem was in the algorithm that uses Roslyn to analyse and understand the code. Here is simplified code that finds all local variables within a method body and tries to determine their exact types.
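The idea, in a rough sketch (assuming the analysed source text is available in a code variable), looks like this:
var tree = CSharpSyntaxTree.ParseText(code);
var compilation = CSharpCompilation.Create("Analysis")
   .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location))
   .AddSyntaxTrees(tree);

var semanticModel = compilation.GetSemanticModel(tree);

// Take the first method in the file and look at all variable declarations inside it
var method = tree.GetRoot().DescendantNodes().OfType<MethodDeclarationSyntax>().First();

foreach (var declaration in method.DescendantNodes().OfType<VariableDeclarationSyntax>())
{
   // The semantic model resolves the exact type, also for 'var' declarations
   var typeInfo = semanticModel.GetTypeInfo(declaration.Type);
   Console.WriteLine(typeInfo.Type?.ToDisplayString());
}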

20/04/2017

How I removed 50% of the code


Title: Azuleyo tiles somewhere in Portugal, Source: own resources, Authors: Agnieszka and Michał Komorowscy


My last 2 posts were about problems with using Roslyn. Nonetheless, even if I sometimes hate it, I'm still using it, so the time has come to show a practical example of using Roslyn. Recently, I've been working on a task that can be summed up as: take this ugly code and do something with it, i.e. more or less a refactoring task.

Now I'll give you some intuition of what I have to deal with. The code that I have to refactor was generated automatically based on an XML schema. These were actually DTO classes used to communicate with an external service. Here are some statistics:
  • 28.7 thousand lines of code in 23 files.
  • 2200 classes and 920 enums.
  • Many classes and enums seem identical or very similar to me.

10/04/2017

Why I hate Roslyn even more




In my previous post I wrote about my problem with "empty" projects and Roslyn. The symptom was that, in some cases, according to Roslyn my C# projects didn't contain any files. For quite a long time, I wasn't able to find a solution, especially because I couldn't reproduce the problem on my local machine. Fortunately, today I noticed exactly the same problem on another computer.

29/03/2017

Why I hate Roslyn




The more I work with Roslyn, the more I appreciate the possibilities it gives and the more I hate it. And I hate it for the same thing as many other projects I've worked with in the past. What is it? Well, I like when a system fails fast, fails loudly and fails in a clear way. Unfortunately, Roslyn can do something completely different, which sometimes makes working with it a pain in the ass. I'll give you some examples.

Issue 1 - Problem with "empty" projects

Here is the code that shows how I usually process documents/files for a given project. It's pretty easy.
var workspace = MSBuildWorkspace.Create();

var sln = await workspace.OpenSolutionAsync(path);
     
foreach (var projectId in sln.ProjectIds)
{
   var project = sln.GetProject(projectId);

   foreach (var documentId in project.DocumentIds)
   {
      // Process a document
   }
}
It works quite well, but only on my machine :) On 2 other machines I'm observing problems. In general, I have an example solution with 2 test projects: one is a WPF application and the other is a WebAPI application.

The problem is that on some machines I can only read and analyze the WPF application. If I try to do exactly the same thing with the WebAPI application, then the project loaded by Roslyn is empty, i.e. it contains no documents (the DocumentIds property is empty)! I've already tried to load this project in different ways but without success.

To be honest currently I'm stuck and I have no idea what is wrong here. Any suggestions?
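By the way, one thing that may help to diagnose such cases: MSBuildWorkspace reports load problems through the WorkspaceFailed event instead of throwing them, so a sketch like this one can at least surface them.
var workspace = MSBuildWorkspace.Create();

// Load problems are reported here instead of being thrown as exceptions
workspace.WorkspaceFailed += (sender, args) =>
   Console.WriteLine($"{args.Diagnostic.Kind}: {args.Diagnostic.Message}");

var sln = await workspace.OpenSolutionAsync(path);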

Issue 2 - the semantic analysis does not work

With Roslyn we can perform the syntax analysis and the semantic analysis of the code. The syntax analysis, based on a syntax tree, only allows you to see the structure of a program. The semantic analysis is more powerful and allows you to understand more. For example, given code like this:

SomeClass x;

With the semantic analysis you can check that SomeClass is defined within SomeNamespace and has X members (methods, properties). For example, here is code showing how to use the semantic analysis to check which interfaces are implemented by a given class at any level of the inheritance hierarchy.
var compilation = await project.GetCompilationAsync();

foreach (var documentId in project.DocumentIds)
{
   var document = project.GetDocument(documentId);

   // Get a syntax tree
   var tree = await document.GetSyntaxTreeAsync(); 

   // Get a root of the syntax tree
   var root = await tree.GetRootAsync(); 
   
   // Find a node of the syntax tree for the first class in a file/document
   var classNode = root.DescendantNodes().OfType<ClassDeclarationSyntax>().FirstOrDefault();

   if (classNode == null) continue;

   // Get a semantic model for the syntax tree
   var semanticModel = compilation.GetSemanticModel(tree); 

   // Use the semantic model to get symbol info for the found class node
   var symbol = semanticModel.GetDeclaredSymbol(classNode); 

   // Check which interfaces are implemented by the class at any level
   foreach(var @interface in symbol.AllInterfaces)  
   {
      // ...
   }
}
If you run this code as it is, it again will not throw any exceptions. However, you'll notice that, according to Roslyn, none of the found classes implements any interface. Where is the problem this time?

It's quite obvious once you know it. To perform the semantic analysis, Roslyn needs to analyse the assemblies used by the project. However, it's not enough to compile the project. You have to explicitly register all required assemblies. I do it the easy way: I simply register all assemblies found in the output folder.
var compilation = await project.GetCompilationAsync();

// Let's register mscorlib
compilation = compilation.AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location));

var directory = "PATH TO OUTPUT DIRECTORY";
if (Directory.Exists(directory))
{
   var files = Directory.GetFiles(directory, "*.dll").ToList(); // You can also look for *.exe files

   foreach (var f in files)
      compilation = compilation.AddReferences(MetadataReference.CreateFromFile(f));
}
And again, if the semantic analysis cannot be performed without that, why is no exception thrown?

Issue 3 - Problem with reading projects/solutions

This one I've already described in more detail in the post about Roslyn and unit tests. The problems were that:
  • MSBuildWorkspace.OpenSolutionAsync was returning an empty solution if a particular assembly was missing (not fast, not loud).
  • MSBuildWorkspace.OpenProjectAsync was returning the error The language 'C#' is not supported (not in a clear way).
These issues were caused by a missing assembly, i.e. Microsoft.CodeAnalysis.CSharp.Workspaces.dll. However, wouldn't it be easier to just throw an exception saying that it is missing? Or at least one saying that it was not possible to find the assembly responsible for reading C# projects and solutions?


Remember: failing fast, loudly and in a clear way does not cost much but can save a lot of time.


*The picture at the beginning of the post comes from own resources and shows cliffs near Cabo da Roca - the westernmost extent of mainland Portugal.

24/03/2017

Report from the battlefield #10 - fuck-up with AutoMapper




Have you ever heard of or used AutoMapper? What a question, of course you have. And in the very unlikely scenario that you haven't: it's an object-to-object mapper that allows you to map probably everything. In short, no more manual, boring, tedious, error-prone mapping.

However, with great power comes great responsibility. Recently, I had an occasion to fix 2 difficult-to-track bugs related to improper usage of AutoMapper. Both issues were related to a feature of AutoMapper which, in my opinion, is almost useless and should at least be disabled by default. Let's look at the following 2 classes and the testing code:
public class SomeSourceClass
{
   public Guid Id { get; set; }
   public string IdAsString => Id.ToString();
   public string Value { get; set; }
}

public class SomeDestinationClass
{
   public Guid Id { get; set; }
   public string IdAsString => Id.ToString();
   public string Value { get; set; }
}

class Program
{ 
   static void Main()
   {
      Mapper.Initialize(config => config.CreateMap<SomeSourceClass, SomeDestinationClass>());
      
      var src = new SomeSourceClass { Id = Guid.NewGuid(), Value = "Hello" };
      var dest = Mapper.Map<SomeDestinationClass>(src);

      Console.WriteLine($"Id = {dest.Id}");
      Console.WriteLine($"IdAsString = {dest.IdAsString}");
      Console.WriteLine($"Value = {dest.Value}");
   }
}
This works like a charm. If you run this example, you should see output like this:

Id = a2648b9e-60be-4fcc-9968-12a20448daf4
IdAsString = a2648b9e-60be-4fcc-9968-12a20448daf4
Value = Hello

Now, let's introduce interfaces that will be implemented by SomeSourceClass and SomeDestinationClass:
public interface ISomeSourceInterface
{
   Guid Id { get; set; }
   string IdAsString { get; }
   string Value { get; set; }
}

public interface ISomeDestinationInterface
{
   Guid Id { get; set; }
   string IdAsString { get; }
   string Value { get; set; }
}

public class SomeSourceClass: ISomeSourceInterface { /*... */}

public class SomeDestinationClass : ISomeDestinationInterface { /*... */}
We also want to support mappings from ISomeSourceInterface to ISomeDestinationInterface so we need to configure AutoMapper accordingly. Otherwise the mapper will throw an exception.
Mapper.Initialize(config =>
   {
      config.CreateMap<SomeSourceClass, SomeDestinationClass>();
      config.CreateMap<ISomeSourceInterface, ISomeDestinationInterface>();
   });

var src = new SomeSourceClass { Id = Guid.NewGuid(), Value = "Hello" };
var dest = Mapper.Map<ISomeDestinationInterface>(src);

Console.WriteLine($"Id = {dest.Id}");
Console.WriteLine($"IdAsString = {dest.IdAsString}");
Console.WriteLine($"Value = {dest.Value}");
If you run this code, it'll seemingly work like a charm. However, there is a BIG PROBLEM here. Let's examine more carefully what was written to the console. The result will look as follows:

Id = a2648b9e-60be-4fcc-9968-12a20448daf4
IdAsString =
Value = Hello

Do you see the problem? The readonly property IdAsString is empty. It seems crazy because the IdAsString property only returns the value of the Id property, which is set. How is that possible?

And here we come to the feature of AutoMapper which, in my opinion, should be disabled by default, i.e. automatic proxy generation. When AutoMapper tries to map ISomeSourceInterface to ISomeDestinationInterface, it doesn't know which implementation of ISomeDestinationInterface should be used. Well, actually no implementation may even exist, so it generates one. If we check the type of the dest variable, we'll see something like:

Proxy<ConsoleApplication1.ISomeDestinationInterface_ConsoleApplication1_Version=1.0.0.0_Culture=neutral_PublicKeyToken=null>.

Initially this feature may look like something extremely useful. But it's the Evil, at least for the following reasons:
  • As in the example, the mapping succeeds but the resulting object contains wrong data. This object may then be used to create other objects... This can lead to bugs that are really difficult to detect.
  • If a destination interface defines some methods, a proxy will be generated, but the mapping will fail due to a System.TypeLoadException.
  • It shouldn't be needed in well-written code. Moreover, if you try to cast the result of the mapping to a concrete class, a System.InvalidCastException will be thrown.
The ideal solution would be to disable this feature. However, I don't know how :( The workaround is to explicitly tell AutoMapper not to generate proxies. To do that, we need to use the As method and specify which concrete type should be created instead of a proxy.

The final configuration looks as follows. It's also worth mentioning that in this case we actually don't need to define the mapping from SomeSourceClass to SomeDestinationClass. AutoMapper is clever enough to figure out that these classes implement the interfaces.
Mapper.Initialize(
   config =>
   {
      config.CreateMap<ISomeSourceInterface, ISomeDestinationInterface>().As<SomeDestinationClass>();
   });


AutoMapper's proxy generation feature is the Evil.


*The picture at the beginning of the post comes from own resources and shows Okonomiyaki that we ate in Hiroshima. One of the best dishes we've ever eaten.

15/03/2017

Report from the battlefield #9 - async/await + MARS




This post from the Report from the battlefield series will be about my own mistake. It is related to async/await and MARS, i.e. Multiple Active Result Sets. async/await allows us to use asynchronous programming more easily. MARS is a feature of MSSQL that allows us to have more than one pending request open per connection at the same time. For example, it may be useful if we have 2 nested loops, i.e. an internal and an external one. The external loop iterates through one result set and the internal one through another. Ok, but you probably wonder what MARS has to do with async/await.
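To give some intuition, here is a sketch of the nested-loops scenario (table and column names are made up, ConnectionString is the same constant as in the code later in this post) with two readers open on the same connection at the same time, which is exactly the situation that requires MultipleActiveResultSets=True in the connection string:
using (var connection = new SqlConnection(ConnectionString))
{
   connection.Open();

   var ordersCmd = new SqlCommand("SELECT Id FROM dbo.Orders", connection);
   using (var orders = ordersCmd.ExecuteReader())
   {
      while (orders.Read())
      {
         var linesCmd = new SqlCommand("SELECT * FROM dbo.OrderLines WHERE OrderId = @id", connection);
         linesCmd.Parameters.AddWithValue("@id", orders.GetInt32(0));

         // A second pending result set on the same connection; this is where MARS is needed
         using (var lines = linesCmd.ExecuteReader())
         {
            while (lines.Read())
            {
               // Process an order line
            }
         }
      }
   }
}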

A few days ago my application started failing due to an InvalidOperationException with the additional message saying that The connection does not support MultipleActiveResultSets. Well, I had used MARS in the past, so I simply enabled it in the connection string by setting the MultipleActiveResultSets attribute to true.

However, later I realized that my application should not require MARS at all, so I started digging into what was wrong. It turned out that the problem was related to my silly mistake in using async/await. Let's look at the following simplified version of the problematic code. We have a trivial Main method:
static void Main()
{
   Start().GetAwaiter().GetResult();
}
Start is an async method responsible for opening a connection to DB and executing other async methods:
private static async Task Start()
{
   using (var c = new SqlConnection(ConnectionString))
   {
      c.Open();

      await Func1(c);
      await Func2(c);
      await Func3(c);
   }
}
Func1, Func2 and Func3 are responsible for reading data and processing them. In our case, for simplification, they all will do the same thing:
private static async Task Func1(SqlConnection c) => await ReadData(c);
private static async Task Func2(SqlConnection c) => ReadData(c);
private static async Task Func3(SqlConnection c) => await ReadData(c);
And here is the ReadData method. It's also simple. It simply reads data from a table:
private static async Task ReadData(SqlConnection c)
{
   var cmd = c.CreateCommand();

   cmd.CommandText = "SELECT * FROM dbo.Fun";

   using (var reader = await cmd.ExecuteReaderAsync())
   {
      while (await reader.ReadAsync())
      {
         // Process data
      }
   }
}
If you run this code, the aforementioned InvalidOperationException will be thrown in the line with ExecuteReaderAsync. The question is why? Well, in this short code it is rather easy to spot that in the Func2 method await is missing before ReadData. But do you know why it is a problem? If not, don't worry, it's a little bit tricky.

Here is an explanation. Without await the simplified flow is as follows:
  • ...
  • Start executes Func2.
  • Func2 executes ReadData.
  • ReadData executes ExecuteReaderAsync.
  • ReadData awaits the result of ExecuteReaderAsync.
  • Control returns to the caller, i.e. Func2.
  • However, Func2 does not use await, so it returns a completed task to the Start method.
  • From the point of view of Start, processing of Func2 is finished, so it executes Func3.
  • Func3 executes ReadData.
  • The previous call to ReadData may still be in progress.
  • It also means that ReadData will call ExecuteReaderAsync when another result set is still being processed.
  • The exception is thrown.
Adding the missing await fixes the problem. Thanks to that, the task returned from Func2 will not be completed until the call to ReadData is over. And if so, Start will not execute Func3 immediately. The final, well-known conclusion is:

Always async/await all the way down.


*The picture at the beginning of the post comes from own resources and shows Laurel forest on La Gomera.

12/12/2016

Did you know that about HTTP?



Title: Chapel of the Emerald Buddha in Bangkok, Source: own resources, Authors: Agnieszka and Michał Komorowscy

Recently, when answering a question on stackoverflow.com, I learned an interesting thing about the HTTP protocol. Actually, now it seems obvious to me ;) What am I talking about? Well, initially I thought that if you send a GET HTTP request in order to download a specific URL, then in the response you will get the entire page/file. In other words, I thought that it's not possible to read a specific part of a page/file. However, it turned out that it's quite easy.
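The standard way to read only a part of a resource over HTTP is the Range header. A minimal sketch (the URL is only an example) looks like this:
var request = (HttpWebRequest)WebRequest.Create("http://example.com/largefile.zip");

// Ask only for the first 100 bytes; a server that supports ranges
// answers with 206 Partial Content and just that fragment
request.AddRange(0, 99);

using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
using (var reader = new StreamReader(stream))
{
   Console.WriteLine(response.StatusCode); // PartialContent
   Console.WriteLine(reader.ReadToEnd());
}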

22/11/2016

How to validate dynamic UI with JQuery?



Source: own resources, Authors: Agnieszka and Michał Komorowscy

One of the most interesting tasks I developed some time ago was a library responsible for the generation of a dynamic UI based on an XML description in an ASP.NET MVC application. The task was not trivial. The UI had to change based on the selections made by a user. I had to support many different types of controls, relations between them (e.g. if we select checkbox A then text box B should be disabled) and, of course, validations. In order to perform the client-side validations I used the jQuery Unobtrusive Validation library. I thought that it would work just like that, but it turned out that a dynamic UI may cause problems. Here is what I did.

16/11/2016

3 reasons why I don't use strict mocks



Source: own resources, Authors: Agnieszka and Michał Komorowscy

The majority, if not all, of mocking frameworks provide 2 types of mocks, i.e. strict and loose. The difference between them is that a strict mock will throw an exception if an unexpected (not configured/set up) method is called. I prefer to use loose mocks because with strict ones unit tests are fragile: even a small change in the code can cause unit tests to start failing. Secondly, if you need to set up many methods, a test becomes less readable. Now, I can see one more reason.
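To illustrate the difference, here is a sketch using Moq (the choice of framework and the interface are only an example):
public interface IRepository
{
   int Count();
}

// ...

// A loose mock silently returns a default value for a call that was not set up
var loose = new Mock<IRepository>();
var count = loose.Object.Count(); // count == 0, no exception

// A strict mock throws for the same, not configured, call
var strict = new Mock<IRepository>(MockBehavior.Strict);
strict.Object.Count(); // throws MockException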

31/10/2016

Roslyn - How to create a custom debuggable scripting language 2?



A screenshot comes from Visual Studio 2015

In the previous post I explained how to create a simple debuggable scripting language based on Roslyn, the compiler as a service. By debuggable I mean that it can be debugged in Visual Studio like any "normal" program, for example one written in C#.

27/10/2016

Roslyn - How to create a custom debuggable scripting language?



A screenshot comes from Visual Studio 2015

Some time ago I decided to play a little bit with Cakebuild. It's a build automation tool/system that allows you to write build scripts using a C# domain specific language. What's more, it is possible to debug these scripts in Visual Studio. It is interesting because Cake scripts are neither "normal" C# files nor are they added to projects (csproj). I was curious how it was achieved, and this post is the result of my analysis. I'll tell you how to create a simple debuggable scripting language. By debuggable I mean that it'll be possible to debug scripts in our language in Visual Studio almost like any "normal" C# program. Cakebuild uses Roslyn, i.e. the compiler as a service from Microsoft, and we'll do the same.

31/08/2016

AjaxExtensions.BeginForm doesn't work. Really?



Source: own resources, Authors: Agnieszka and Michał Komorowscy

The goal of using Ajax is to communicate with the server asynchronously without reloading the entire page. Specifically, AjaxExtensions.BeginForm can be used to update a selected part of a web page. It is relatively easy to use but can also be troublesome, especially when we try to apply it in an application which wasn't using Ajax earlier. I decided to write this short technical post because recently I came across the following issue a few times:

AjaxExtensions.BeginForm redirects a user to a new page instead of refreshing a fragment of the current one.

This problem has an easy explanation. Under the hood, AjaxExtensions.BeginForm uses a JavaScript library called Microsoft jQuery Unobtrusive Ajax. The issue is that this library is not installed by default when we create a new project. It's easy to forget about it.

If you have the described problem:
  • Check if the packages.config file contains the Microsoft.jQuery.Unobtrusive.Ajax package.
  • Check if the jquery.unobtrusive-ajax.js file is referenced in the HTML, e.g.: <script src="/scripts/jquery.unobtrusive-ajax.js"></script>
  • If you use bundles, check if jquery.unobtrusive-ajax.js was included in a bundle, e.g.:
    public static void RegisterBundles(BundleCollection bundles)
    {
       ...
       var js = new ScriptBundle("~/bundles/MyBundle").Include("~/Scripts/jquery.unobtrusive-ajax.js");
       ...
    }
  • Besides, check if a bundle with jquery.unobtrusive-ajax.js is rendered properly e.g.:
    @Scripts.Render("~/bundles/MyBundle")

15/11/2015

Interview Questions for Programmers by MK #6


Question #6
What is arithmetic overflow and how is it handled in .NET?

Answer #6
It is a situation when the result of an arithmetic operation exceeds (is outside of) the range of a given numeric type. For example, the maximum value for the byte type in .NET is 255. So in the following example, the operation a + b will cause an overflow:
byte a = 255;
byte b = 20;
byte c = (byte)(a + b); // the cast is required because byte operands are promoted to int
The final result depends on the numeric types used:
  • For integer types, either an OverflowException will be thrown or the result will be trimmed/cropped (the default behaviour). It depends on the compiler configuration and the usage of the checked / unchecked keywords.
  • For floating point types, an OverflowException will never be thrown. Instead, the overflow will result in positive or negative infinity.
  • For the decimal type, an OverflowException will always be thrown.
var b = byte.MaxValue;
//The result will be zero because:
//b = 255 = 1111 1111
//b++ = 256 = 1 0000 0000
//The result has 9 bits so it will be trimmed to 8 bits, which gives 0000 0000
b++;

checked
{
   b = byte.MaxValue;
   //OverflowException will be thrown
   b++;
}

var f = float.MaxValue;
//The result will be float.PositiveInfinity
f *= 2;

decimal d = decimal.MaxValue;
//OverflowException will be thrown
d++;

27/07/2015

A hint how to use TaskCompletionSource<T>


Some time ago I wrote about using the TaskCompletionSource<T> class in order to take advantage of the async/await keywords. In that post I included the following code:
public async Task<Stream> ProcessFileAsync(string key, string secret, string path)
{
   var client = new DropNetClient(key, secret);
   //...
   var tcs = new TaskCompletionSource<Stream>();
   client.GetFileAsync(path, response => tcs.SetResult(new MemoryStream(response.RawBytes)), tcs.SetException);
   return await tcs.Task;
}
Now, let's assume that we want to provide a possibility to cancel the task returned from the ProcessFileAsync method. We can do something like this:
public async Task<Stream> ProcessFileAsync(string key, string secret, string path, CancellationToken ct)
{
   var client = new DropNetClient(key, secret);
   //...
   var tcs = new TaskCompletionSource<Stream>();

   ct.Register(tcs.SetCanceled);

   client.GetFileAsync(path, response => tcs.SetResult(new MemoryStream(response.RawBytes)), tcs.SetException);
   return await tcs.Task;
}
I used the CancellationToken.Register method in order to register a callback that will be executed when the token is cancelled. This callback is responsible for notifying TaskCompletionSource<T> that the underlying task should be cancelled.

You may say that it is not enough because this code doesn't inform DropNetClient that the action should be cancelled. You are right. However, to my knowledge the DropNet API doesn't provide such a possibility.

It leads to a situation when the task is cancelled but DropNetClient continues processing, and finally the TaskCompletionSource.SetResult method is executed. This will cause an InvalidOperationException because this method cannot be executed for a task that has already completed (here: has already been cancelled). What can we do in this case?

The first solution is to check if the task is cancelled before calling the SetResult method. However, it can still happen that the task will be cancelled after this check but before calling SetResult.

My proposition is to use the methods from the TaskCompletionSource.Try* family. They don't throw exceptions if the task has already completed.
public async Task<Stream> ProcessFileAsync(string key, string secret, string path, CancellationToken ct)
{
   var client = new DropNetClient(key, secret);
   //...
   var tcs = new TaskCompletionSource<Stream>();

   ct.Register(tcs.SetCanceled);

   client.GetFileAsync(path, response => tcs.TrySetResult(new MemoryStream(response.RawBytes)), tcs.TrySetException);
   return await tcs.Task;
}
I'm aware that it is not a perfect solution because it actually does not cancel the processing. However, without modifying the DropNet code it is not possible. In the case of my application it is an acceptable solution, but it is not a rule.

16/07/2015

Interview Questions for Programmers by MK #5


Question #5
Here you have a very simple implementation of the Template method pattern.
public abstract class BaseAlgorithm
{
   protected SomeObject Resource { get; set; }
   //Other resources

   public void Start()
   {
      // Configure
      Resource = new SomeObject();
      //...
      try
      {
         InnerStart();
      }
      finally
      {
         // Clean up
         Resource.Dispose();
         Resource = null;
         //...
      }
   }

   protected abstract void InnerStart();
}

public class Algorithm1: BaseAlgorithm
{
   protected override void InnerStart()
   {
      //Do something with allocated resources
   }  
}
At some point someone decided to create a new class, Algorithm2, derived from BaseAlgorithm. The difference between the new class and the previous one is that Algorithm2 starts an asynchronous operation. A programmer decided to use the async/await keywords to handle this scenario. What do you think about this approach? What could possibly go wrong?
public class Algorithm2: BaseAlgorithm
{
   protected async override void InnerStart()
   {
      var task = DoAsyncCalculations();
      await task;

      //Do something with allocated resources
   }

   private Task DoAsyncCalculations()
   {
      //Let's simulate asynchronous operation
      return Task.Factory.StartNew(() => Thread.Sleep(1000));
   }
}
Answer #5
I think that the developer who created Algorithm2 doesn't understand well how the async/await keywords work. The main problem is that the finally block inside the Start method will be executed before the DoAsyncCalculations method finishes its calculations. In other words, resources will be disposed in the middle of the calculations and this will cause an exception. The sequence of events will be as follows:
  • Start method begins.
  • SomeObject is created.
  • InnerStart method begins.
  • InnerStart starts an asynchronous operation and uses await to suspend its progress.
  • This causes control to return to the Start method.
  • Start method cleans up the resources.
  • When the asynchronous operation is finished, InnerStart continues processing. It tries to use the resources that have already been disposed, which leads to an exception.
It is also not recommended to have async void methods (except for event handlers). If an async method doesn't return a task, it cannot be awaited. It is also easier to handle exceptions if an async method returns a task. For details see also this article.

To fix the problem, BaseAlgorithm must be aware of the asynchronous nature of the calculations. For example, the InnerStart method can return a task which will be awaited inside the try block. However, it also means that the synchronous version of InnerStart in Algorithm1 will have to be changed. It may not be acceptable. Generally, providing asynchronous wrappers for synchronous methods is debatable and should be carefully considered.
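A minimal sketch of such an async-aware base class (just an illustration, reusing SomeObject from above) could look like this:
public abstract class BaseAlgorithmAsync
{
   protected SomeObject Resource { get; set; }

   public async Task StartAsync()
   {
      // Configure
      Resource = new SomeObject();
      try
      {
         // The asynchronous step is awaited inside try,
         // so the finally block runs only after the calculations are done
         await InnerStartAsync();
      }
      finally
      {
         // Clean up
         Resource.Dispose();
         Resource = null;
      }
   }

   protected abstract Task InnerStartAsync();
}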

In this case, I'd consider having separate implementations of the Template method pattern for synchronous and asynchronous algorithms.

12/07/2015

Interview Questions for Programmers by MK #4


Question #4
You have to implement very fast, time-critical network communication between nodes of a distributed system. You decided to use good old sockets. However, you haven't decided yet whether to use the TCP or UDP protocol. Which one would you choose and why?

Answer #4
If speed is the only important factor, I'd choose UDP. UDP is faster than TCP because it has a smaller overhead. In comparison to TCP it is a connection-less, unreliable protocol that doesn't provide features like retransmission, acknowledgment or ordering of messages.

However, it also means that using UDP might be more difficult and will require additional coding in some cases, for example if we have to ensure that sent messages have been delivered. In that case, I'd certainly use TCP.

Finally, there is one more thing in favor of UDP: it provides broadcasting and multicasting. So, if that is required, I'd also use UDP instead of TCP.

06/07/2015

A practical example of using TaskCompletionSource<T>


Recently I found a question about real-life scenarios for using the rather unknown TaskCompletionSource<T> class. I started thinking where I would use it and very quickly I found a good practical example.

I have a pet project, LanguageTrainer, that helps me learn words in foreign languages. Some time ago I added Dropbox support to it. It allows me to export/import a list of words to/from Dropbox. I developed it in a synchronous way. Now I prefer an asynchronous approach and I want to take advantage of the async/await keywords.

The problem is that the DropNet library, which makes communication with Dropbox easy, doesn't use async/await. It has an asynchronous API, but it is callback based. The really easy solution here is to use TaskCompletionSource<T>. Here is a (simplified) example. Let's start with the original code that downloads a given file from Dropbox.
public void ProcessFile(string key, string secret, string path)
{
   var client = new DropNetClient(key, secret);
   // ...
   var bytes = client.GetFile(path);
   //Process bytes
}
The version that uses the DropNet asynchronous API looks as follows:
public void ProcessFileAsync(string key, string secret, string path)
{
   var client = new DropNetClient(key, secret);
   //...
   client.GetFileAsync(path, 
      response => 
      {
         var bytes = response.RawBytes;
         //Process bytes
      }, 
      ex => 
      {
         //Handle exception
      });
}
And finally, the asynchronous version with async/await looks as follows:
public async Task<Stream> ProcessFileAsync(string key, string secret, string path)
{
   var client = new DropNetClient(key, secret);
   //...
   var tcs = new TaskCompletionSource<Stream>();
   client.GetFileAsync(path, response => tcs.SetResult(new MemoryStream(response.RawBytes)), tcs.SetException);
   return await tcs.Task;
}
...
var bytes = await ProcessFileAsync(key, secret, path);
//Process bytes
The ProcessFileAsync method is marked as async and returns a task, so it can be awaited. Easy, isn't it? A few lines of code and you can use async/await with other types of asynchronous APIs.

15/06/2015

Interview Questions for Programmers by MK #3


Question #3
You found the following code and were asked to refactor it if needed:
var sb = new StringBuilder();
sb.AppendLine("<root>");
sb.AppendLine(String.Format("   <node id=\"{0}\"/>", 1));
sb.AppendLine(String.Format("   <node id=\"{0}\"/>", 2));
sb.AppendLine(String.Format("   <node id=\"{0}\"/>", 3));
//Many, many lines of code
sb.AppendLine("</root>");
What would you do and why?

Answer #3
It is not the best idea to create XML documents using string concatenation because it is error-prone. Besides, the created documents are not validated in any way. In .NET we have a few possibilities to refactor this code.

I recommend using XmlWriter in this case because we want to create a new document and we do not want to edit an existing one. However, if we also want to modify existing XML documents, a good choice will be the XDocument or XmlDocument class.

In the case of small XML documents (when performance is not critical) it might be a good idea to use XDocument or XmlDocument even if we don't want to edit existing documents. XDocument in particular can be simpler to use than XmlWriter.
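For illustration, the same document could be built with XDocument roughly like this (a sketch):
// Builds <root> with three <node id="..."/> children, no string concatenation needed
var doc = new XDocument(
   new XElement("root",
      Enumerable.Range(1, 3).Select(i => new XElement("node", new XAttribute("id", i)))));

Console.WriteLine(doc);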

Comments #3
I remember that when I wanted to create an XML document in C# for the first time, I did it by using string concatenation. The code worked fine and I was surprised that it didn't pass a review :)

27/04/2015

Interview Questions for Programmers by MK #1


Do you know the series of posts titled Interview Question of the Week on the SQL Authority blog? If not, or if you don't know this blog at all, you have to catch up. I learned a lot from this series, so I decided to start publishing something similar but focused more on .NET and programming.

This is the first post in a series which I called Interview Questions for Programmers by MK and in which I'm going to publish questions that I'd ask if I were a recruiter. Of course, they are somehow based on my experience as a participant in many interviews.

Question #1
What is the meaning of the using statement in the code below? What would you do if the using keyword did not exist?
using(var file = File.OpenWrite(path))
{
   //...
}
Answer #1
In this example the using statement is used to properly release resources (to call the Dispose method) that are owned by an object of a class that implements the IDisposable interface. It is syntactic sugar and could be replaced with a try/finally block in the following way:
var file = File.OpenWrite(path);
try
{
   //...
}
finally
{
   if(file != null)
      file.Dispose();
}