
18/05/2017

Report from the battlefield #11 - premature optimization is the root of all evil?




Have you ever heard that "premature optimization is the root of all evil"? Probably yes. It's a quite well-known phrase by Donald Knuth. However, the whole quote is much less known:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

Why am I writing about that? Because recently I had an occasion to fix an application which was written according to the first part of this quote. Or even worse, it was written according to the rule "any optimization is the root of all evil". Here are some examples of what not to do and some tips on what to do.

27/02/2017

Report from the battlefield #8 - always remember about the context



Title: Sunrise seen from the top of Mount Fuji, Source: own resources, Authors: Agnieszka and Michał Komorowscy

I decided to change the character of the Report from the battlefield series a little bit. Initially, in this series, I was describing my observations from my work as a reviewer for a recruitment company. Now, I'll also be writing about findings from my day-to-day work. To start, I'll give you a tip on how to log useful information.

I worked with an application that is responsible for monitoring folders. If it detects any new files, they are processed and copied somewhere else. The application logs information like the number of files to be processed, the file that is currently being processed, etc. This information is logged with the severity Information or Debug. It happens that a given file cannot be copied, for example because a file with the same name already exists in the destination directory. In that case .NET throws System.IO.IOException. This exception is caught and logged with the severity Error. A simplified version of the log could look like this:

INFO - 10 files have been found
INFO - Processing file started
...
INFO - Processing file ended
INFO - Processing file started
ERROR - An error occurred while processing a file: Cannot create a file when that file already exists.
...
INFO - Processing file ended
...

It's good that a lot of important information is logged. However, there is a major issue with this log: the lack of context! For example, we know that some files have been processed but we don't know which ones exactly. This log should look as follows (the changes are the directory and file paths):

INFO - 10 files have been found in the directory 'C:\Input'
INFO - Processing file 'C:\Input\a.txt' started
...
INFO - Processing file 'C:\Input\a.txt' ended
INFO - Processing file 'C:\Input\b.txt' started
ERROR - An error occurred while processing a file: Cannot create a file when that file already exists.
...
INFO - Processing file 'C:\Input\b.txt' ended
...

It looks much better. Based on the log we can figure out which directory was monitored, which files were processed successfully and which were not. However, that's not everything. There is one more subtle problem here. What if messages with the severity Information or lower aren't logged (for example because of performance reasons) and an error is reported? In this case we'll get the following log:

...
ERROR - An error occurred while processing a file: Cannot create a file when that file already exists.
...

It's better than nothing but again we don't know which file's processing actually failed. The expected result is:

...
ERROR - An error occurred while processing a file 'C:\Input\b.txt': Cannot create a file when that file already exists.
...
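
In code it boils down to including the identifying data in the error message itself. A minimal sketch of such a catch block (the Logger, filePath and destinationPath names are hypothetical, not taken from the real application):

try
{
   File.Copy(filePath, destinationPath);
}
catch (IOException ex)
{
   // The file path is part of the message, so the ERROR entry keeps its context
   // even when Information/Debug messages are filtered out.
   Logger.Error($"An error occurred while processing a file '{filePath}': {ex.Message}");
   throw;
}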

To sum up, always remember the context when logging.

23/01/2017

C++ for C# developers - code like in Google



Title: Elephant Retirement Camp in the vicinity of Chiang Mai, Source: own resources, Authors: Agnieszka and Michał Komorowscy

In the post Nuget in C++ rulez I wrote that I had returned to programming in C++. It is like a new world for me but it's getting better and better. I'm even reminding myself of things that I learned many years ago, so it's not that bad with me ;) Recently, I've discovered a C++ alternative to .NET's StyleCop. StyleCop is a tool that analyses C# code in order to check if it is consistent with given rules and good practices. Obviously, there is a similar thing for C++. I'm talking about a tool called CppLint that was created by Google. It's written in Python and is fairly easy to use. However, please note that CppLint requires the old Python 2.7. I tried and it won't work with Python 3.5.

When I ran CppLint on my code, it turned out that my habits from C# don't fit the C++ world according to Google. Here is an example of Hello World written in C++ but in C# style.
#include <iostream>

namespace sample 
{
    class HelloWorld
    {
        public:
            void Fun()
            {
                std::cout << "Hello World Everyone!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" << std::endl;
            }
    };
}

int main()
{
  sample::HelloWorld hw = sample::HelloWorld();
  hw.Fun();
  
  return 0;
}
If we verify this code, we will get the following errors:
a.cpp:0:  No copyright message found.  You should have a line: "Copyright [year] <Copyright Owner>"  [legal/copyright] [5]
a.cpp:3:  Line ends in whitespace.  Consider deleting these extra spaces.  [whitespace/end_of_line] [4]
a.cpp:4:  { should almost always be at the end of the previous line  [whitespace/braces] [4]
a.cpp:5:  Do not indent within a namespace  [runtime/indentation_namespace] [4]
a.cpp:6:  { should almost always be at the end of the previous line  [whitespace/braces] [4]
a.cpp:7:  public: should be indented +1 space inside class HelloWorld  [whitespace/indent] [3]
a.cpp:9:  { should almost always be at the end of the previous line  [whitespace/braces] [4]
a.cpp:10:  Lines should be <= 80 characters long  [whitespace/line_length] [2]
a.cpp:13:  Namespace should be terminated with "// namespace sample"  [readability/namespace] [5]
a.cpp:16:  { should almost always be at the end of the previous line  [whitespace/braces] [4]
a.cpp:19:  Line ends in whitespace.  Consider deleting these extra spaces.  [whitespace/end_of_line] [4]
a.cpp:21:  Could not find a newline character at the end of the file.  [whitespace/ending_newline] [5]
At the beginning of each line we have the line number where an error was detected. The number in square brackets at the end of each line informs you how confident CppLint is about each error i.e. 1 - it may be a false positive, 5 - extremely confident. In order to fix all these problems I did the following things:
  • Added Copyright 2016 Michał Komorowski.
  • Removed whitespaces at the end of lines.
  • Added a new line at the end of file.
  • Added a comment // namespace sample
  • Moved curly braces. This is the change I like the least.
  • Broke a line that was too long. It's also a little bit strange to me. 80 characters doesn't seem to be a lot. However, shorter lines make working with multiple windows easier (see also this answer).
Then I ran the tool again and got some new errors:
a.cpp:6:  Do not indent within a namespace  [runtime/indentation_namespace] [4]
a.cpp:7:  public: should be indented +1 space inside class HelloWorld  [whitespace/indent] [3]
I fixed them as well and the final version of Hello World, compliant with Google rules, looks as follows:
// Copyright 2016 Michal Komorowski

#include <iostream>

namespace sample {
class HelloWorld {
 public:
  void Fun() {
    std::cout
      << "Hello World Everyone!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
      << std::endl;
  }
};
}  // namespace sample

int main() {
  sample::HelloWorld hw = sample::HelloWorld();
  hw.Fun();
  return 0;
}

It's worth adding that CppLint has many configuration options. For example, you can disable some rules if you don't agree with them or change the maximum allowed length of a line (the default is 80). Options can also be read from the configuration file CPPLINT.cfg.
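
For example, an invocation like the one below (to the best of my knowledge the --linelength and --filter options work this way; please verify against your version of the script) raises the allowed line length and disables two rules:

cpplint.py --linelength=120 --filter=-whitespace/braces,-legal/copyright a.cpp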

16/01/2017

When is Excel better than machine learning?



Title: Ruins of a castle in southern Poland, Source: own resources, Authors: Agnieszka and Michał Komorowscy

I can bet that some of you think that I'm crazy because I'm saying such blasphemies! Surely everyone knows that Excel is not for real developers ;) If you think so, I'll tell you a short story.

24/07/2016

Report from the battlefield #5 - Logging can kill performance


Public Domain, https://commons.wikimedia.org/w/index.php?curid=48390
Source: own resources, Authors: Agnieszka and Michał Komorowscy

So far in the Report from the battlefield series I wrote about my experiences as an expert in a recruitment company. This time I'll write about a bug that I found in production. It was all about performance. The problem was that in the new version of an application one operation slowed down about 6 times. Initially, I suspected that the amount of data had simply increased considerably or that there were some network problems. Fortunately, I easily reproduced the issue on my dev machine. Reproducing a problem is half the battle. Though performance problems are usually difficult to analyse, so I was ready for a long investigation.

I started stepping through the code with a debugger just to see what was going on. Everything seemed to be ok until... One of the final operations was to log to a file what had been retrieved from the database. What's important, the log level was set to Trace, so even a large amount of data shouldn't matter in production. Why? Because in production, precisely for performance reasons, the logger should be configured not to log everything to a file. In other words, it should ignore messages with the log level Trace or Debug. However, after I had pressed F10 (Step Over), I had to wait a few seconds till the logging ended. BINGO!

My first thought was that someone had configured the logger in the wrong way in production. A typical PEBKAC problem. To verify my hypothesis I changed the configuration of the logger and executed the problematic operation. Unfortunately, the problem occurred again. Another look at the code and I knew what was wrong. Do you already know?

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

The problem was that for a large amount of data the application required a few seconds just to create/prepare a message for the logger. To make things worse, this message was created regardless of whether it was later used by the logger or not. During development it may be acceptable but not in production! There are 2 potential solutions to this problem. The details depend on the logging framework:
  • The first approach is to simply check the logging level before creating/preparing a message e.g.:
    if(Logger.LogLevel == LogLevel.Trace) 
    {
        /* Prepare and log a message */
    }
    
  • The second approach is to use deferred execution, for example lambdas (see the sketch below):
    Logger.Trace(() => /* Prepare a message */);
    If a logger supports this syntax, the lambda will be executed if and only if it is required.
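
Below is a minimal sketch of both approaches with a hypothetical Logger class (it is not the API of any concrete logging framework):

using System;

public enum LogLevel { Trace, Debug, Info, Error }

public class Logger
{
   public LogLevel Level { get; set; } = LogLevel.Error;

   public bool IsTraceEnabled { get { return Level <= LogLevel.Trace; } }

   // Eager API - the message must already be built by the caller.
   public void Trace(string message)
   {
      if (IsTraceEnabled) Console.WriteLine(message);
   }

   // Deferred API - the message factory runs only when Trace is actually enabled.
   public void Trace(Func<string> messageFactory)
   {
      if (IsTraceEnabled) Console.WriteLine(messageFactory());
   }
}

public static class LoggingExample
{
   public static void Log(Logger logger, object[] hugeResultSet)
   {
      // Approach 1 - check the level before preparing an expensive message.
      if (logger.IsTraceEnabled)
         logger.Trace(string.Join(", ", hugeResultSet));

      // Approach 2 - defer the expensive work behind a lambda.
      logger.Trace(() => string.Join(", ", hugeResultSet));
   }
}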

14/03/2016

Report from the battlefield #4 - Do not waste my and your time


The Report from the battlefield series is based on my experience as a reviewer. The idea is simple. In order to evaluate programming skills, a candidate is asked to write a simple project. To do so he/she needs to invest some amount of time (roughly speaking, a few hours). Taking this into account I assume that he/she must be interested in finding a new job. Otherwise he/she wouldn't spend his/her private time writing a project which isn't exactly exciting. That's why I'm all the more surprised that some people don't care about the first impression.

Here are some examples showing what I'm talking about:
  • A connection string used by the application referred to some server that of course wasn't available to me.
  • The database used by the application didn't contain any sample data.
  • I had to manually create a database used by the application. There was a script but it didn't work without fixes.
  • The application crashed immediately when started.
  • ...
It's a waste of time from my perspective. It is true that all these problems can be fixed quickly but they require additional effort from me. Believe me, it is extremely annoying. Instead of doing an actual review someone forces me to fix bugs. What's worst, these bugs could have been easily avoided with a little bit more effort.

Please remember, the first impression is important. It'll be appreciated if a reviewer can run your application just by pressing F5 in Visual Studio (or in another IDE). You can test it in a straightforward way. Before submitting a project to a review, copy it to another machine and try to run it there. It should work without any additional actions.

Currently, if a project cannot be run without problems I don't do the review. However, I have a soft heart and I give a candidate one chance to fix them. Do you think that it's a good approach? I have my doubts because an employer probably wouldn't do so.

07/02/2016

Report from the battlefield #3 - IEnumerable vs IQueryable


Some time ago I was reviewing a data access layer that was based on Entity Framework. I found code which immediately attracted my attention. The simplified version is shown below.
public IList<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... }).ToList(); 
}
...
var numberOfProducts = GetAll().Count();
The GetAll method is pretty simple because it just reads products from a database. The result returned from this method is used to count the number of products in the database. Although it is simple, it contains a serious bug. The problem is that it uses the ToList method to return a list of products. It causes ALL products to be retrieved from the database in order to return them in the form of a list. In other words, there is no deferred execution here.

If we work with a local database and the number of products is small it shouldn't be a problem. However, this kind of code might lead to hard-to-analyse performance problems, for example if our application uses a remote database and/or there are thousands of products. The desired behaviour is that products are counted by the database engine. So let's try to make a fix:
public IEnumerable<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... });
}
...
var numberOfProducts = GetAll().Count();
Now it looks much better, doesn't it? GetAll doesn't use ToList and returns the IEnumerable interface. Unfortunately, this solution is far from perfect. In comparison to the first version, the only difference is the moment when all products are retrieved from the database. This time it will happen when the Count method is executed. Why? Before I explain, let's see the correct solution:
public IQueryable<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... });
}
...
var numberOfProducts = GetAll().Count();
This time I used IQueryable instead of IEnumerable. This small change is crucial. It causes no products to be read from the database. Entity Framework "sees" that we only want to count the number of products and an appropriate query is sent to the database. In other words, LINQ to Entities is used.

The situation is completely different when we work with IEnumerable. In order to understand the difference we have to realise one thing. The Count method for IEnumerable is something different than the Count method for IQueryable. With IEnumerable we use LINQ to Objects, and LINQ to Objects operates on objects in memory; it cannot communicate with a database. That is why all products must be read from the database if we want to count them.

Now someone inquiring might say that for virtual methods it shouldn't matter if we have variables of type IEnumerable or of type IQueryable if these variables point to the same object. After all, C# is an object oriented language that supports polymorphism etc. Well, it is all true but only for virtual methods, and Count is not a virtual method. It is an extension method and extension methods don't support polymorphism.
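
To illustrate, here is a toy, in-memory sketch (assuming System.Linq and System.Collections.Generic are imported; there is no database involved, so the point is only which Count method is bound at compile time):

IQueryable<int> queryable = new List<int> { 1, 2, 3 }.AsQueryable();
IEnumerable<int> enumerable = queryable; // the same object, just a different static type

var a = queryable.Count();  // binds to Queryable.Count - for a real EF query it would be translated to SQL
var b = enumerable.Count(); // binds to Enumerable.Count - for a real EF query it would read all rows into memory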

29/12/2015

Report from the battlefield #2 - amount of data matters a lot


In the next post from the Report from the battlefield series I'll write about a serious mistake that is quite common according to my experience. I'm thinking about a situation when a developer assumes that all data from a database can be processed on the client side. I'll give you 2 examples that I encountered during my reviews.

Case 1

A developer was asked to implement the paging functionality. He created a single page web application. It looked nice and the paging was working correctly at first glance. I decided to check how it was implemented under the hood. I examined a web service that was used by the application and I was surprised. Why? I didn't find a web method responsible for returning pages. The next step was to dig into the JavaScript code. Unfortunately, I discovered that the paging was implemented only on the client side, i.e. the application initially downloaded all data from the database (via the web service).
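
For comparison, a server-side paging web method could look more or less like the sketch below (the PagedResult class, the method name and the ctx context are hypothetical):

public class PagedResult<T>
{
   public int TotalCount { get; set; }
   public IList<T> Items { get; set; }
}

public PagedResult<Product> GetPage(int pageNumber, int pageSize)
{
   // Only one page of products is read from the database; the rest stays on the server.
   return new PagedResult<Product>
   {
      TotalCount = ctx.Products.Count(),
      Items = ctx.Products
         .OrderBy(p => p.Id)
         .Skip((pageNumber - 1) * pageSize)
         .Take(pageSize)
         .ToList()
   };
}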

Case 2

In another project the paging was implemented correctly on the server side but the developer made a more subtle mistake. The application had a shopping cart function. Of course it was possible to add and remove products to and from the cart. To do so, a web service used by the application had a GetCart method. This method was responsible for retrieving the current content of the cart from a database.

However, it was strange that this method returned only identifiers of products. What's more, there was no GetProductDetails web method. It made me curious how the application displays product details to users knowing only their identifiers. It turned out that at initialization the application read the details of all products from the database. Having all products on the client side, it was easy to find the details of a product based on its identifier.

Summary

In both cases the applications were fast enough because of the small amount of data. In the case of real-life databases they would not be. I think that we should always be prepared for the worst case. Especially when we participate in a recruitment process and we want to show ourselves from the best side. An evaluator shouldn't have to guess whether we know something or not.

24/11/2015

Report from the battlefield #1 - EF and DTOs


Some time ago, I started doing code reviews of various projects for a recruitment company. It is an interesting experience and I'm learning a lot thanks to it. I also observed that some mistakes are repeated by different authors. Others are not so common but are not obvious either. So I came up with the idea to start a new series of posts under the title "Report from the battlefield". In this series I'll describe my observations and findings from my reviews.

Let's start. Recently, I reviewed a project created with AngularJS + ASP.NET Web API + Entity Framework. The code was neither very good nor very bad. However, I noticed that the author decided to use a class generated from the EDMX model as a DTO (Data Transfer Object). The reasoning behind this decision was simple - this class had all the properties required on the client side, so why not use it. Well, there are a few reasons why it is not a good idea.
  • With dedicated DTOs it is less likely that changes on the server side will affect the client side.
  • With dedicated DTOs we can easily control what will be sent to the client side and in what format.
  • With dedicated DTOs the server side model can be completely different from the client side model.
  • By exposing EF classes to the client side we effectively expose the database model to the client side!
You may agree with my points or not. So, I'll give you a practical example of what could happen if we use EF classes as DTOs. Let's assume that there is an EDMX model with 3 types of entities:
  • Customer with Orders navigation property.
  • Orders with Customer and Products navigation properties.
  • Products with Orders navigation property.
Now we want to read only 1 customer from a database, serialize it to JSON and send the result to the client side. What could go wrong? Well, because of the navigation properties, the JSON serializer that is used by ASP.NET Web API will read from the database and convert to JSON the whole graph of customers, orders and products! To be more specific, I saw a 0.5 MB response which should have been a few kilobytes for a very small database (it contained a few dozen records in all tables)! I can bet that in the case of a production database a response would have hundreds of megabytes.
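
For illustration only, a dedicated DTO and its projection could be as simple as the sketch below (the names are hypothetical):

public class CustomerDto
{
   public int Id { get; set; }
   public string Name { get; set; }
   // Only the scalar properties that the client actually needs - no Orders navigation
   // property, so the serializer cannot follow the graph into orders and products.
}

// Somewhere in a Web API controller (ctx is the EF context):
var dto = ctx.Customers
   .Where(c => c.Id == id)
   .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
   .Single();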

22/10/2015

TransactionScope and multi-threading


It's my third post about TransactionScope. This time I'll write about using it with multi-threading. Let's start with the following code:
using (var t = new TransactionScope())
{
   var t1 = Task.Factory.StartNew(UpdateDatabase);
   var t2 = Task.Factory.StartNew(UpdateDatabase);
   Task.WaitAll(t1, t2);
   t.Complete();
}

private static void UpdateDatabase()
{
   using (var c = new SqlConnection(connectionString))
   {
      c.Open();

      WriteDebugInfo();

      new SqlCommand(updateCommand, c).ExecuteNonQuery();
   }
}

private static void WriteDebugInfo()
{
   Console.WriteLine("Thread= {0}, LocalIdentifier = {1}, DistributedIdentifier = {2}",
      Thread.CurrentThread.ManagedThreadId,
      Transaction.Current?.TransactionInformation.LocalIdentifier,
      Transaction.Current?.TransactionInformation.DistributedIdentifier);
}
It seems simple but it doesn't work. The problem is that a connection that is created in the UpdateDatabase method will not participate in any transaction. We can also observe that WriteDebugInfo will write empty transaction identifiers to the console. It happens because, in order to read the ambient transaction (the transaction the code is executed in), TransactionScope uses the Transaction.Current property which is thread static (i.e. specific to a thread).

To overcome this issue we have two possibilities. The first one is to use DependentTransaction. However, I'll not show how to do it because since .NET 4.5.1 there is a better way - the TransactionScopeAsyncFlowOption enum. Let's try.
using (var t = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
   ...
}
Unfortunately, there is a big chance that this time we will get a TransactionException with the message "The operation is not valid for the state of the transaction." in the line with ExecuteNonQuery. The simplified stack trace is:

at System.Transactions.TransactionStatePSPEOperation.get_Status(InternalTransaction tx)
at System.Transactions.TransactionInformation.get_Status()
...
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at Sandbox.Program.UpdateDatabase(Object o)

I read a lot about this but nobody was able to explain why it happens. I also looked into the source code of the TransactionStatePSPEOperation class. It was instructive because I learned what PSPE is - Promotable Single Phase Enlistment. However, it also didn't give me an exact answer.

So, I played a little bit with the code and I noticed that the problem occurs when:
  • One thread tries to run ExecuteNonQuery.
  • Another thread waits for the opening of the connection.
However, when both connections were already opened then the exception wasn't thrown.

At this point it is worth recalling one thing - when there are 2 or more connections opened in a transaction scope at the same time, the transaction is promoted to a distributed one. I'm not 100% sure but I think that the problem occurs because it is not allowed to use a connection which participates in a transaction which is in the middle of being promoted to a distributed one. So, the solution is to ensure that the transaction is distributed from the beginning. Here is the fixed code with a magic line (I found it here):

using (var t = new TransactionScope())
{
   //The magic line that makes a transaction distributed
   TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);

   var t1 = Task.Factory.StartNew(UpdateDatabase);
   var t2 = Task.Factory.StartNew(UpdateDatabase);
   Task.WaitAll(t1, t2);
   t.Complete();
}
Nonetheless, the more I think about this the more convinced I am that using TransactionScope with multi-threading is asking for problems.

13/10/2015

How not to use TransactionScope. Another WTF!


This time I will write again about TransactionScope. It is a very useful class and seems to be extremely easy to use. In the majority of cases it is true. However, there are also some pitfalls lurking for developers. Especially for those who don't like to waste time reading MSDN documentation if it's not really needed, i.e. probably the vast majority of us ;)

Some time ago, I was analysing more or less the following code:
using(var t = new TransactionScope())
{
   var c = ConnectionProvider.ProvideConnection();
   //Use a connection to update a database
   //...
   t.Complete();
}
ConnectionProvider is a class that hides the details of managing connections to a database. There was also a bug in the code responsible for updating the database which caused exceptions. I fixed it and ran the tests again. This time an exception was not thrown but something was wrong because the database contained unexpected data. It looked like the transaction was not rolled back!

At first, I thought that it was some kind of magic. However, as usual in this kind of case, it wasn't. I dug into ConnectionProvider and found out that this class was performing some kind of pooling and a connection wasn't opened every time. It was a big problem because connections opened outside a transaction scope do not participate in the transaction. The solution to this problem is to explicitly enlist a connection in the existing transaction scope with the EnlistTransaction method.
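
A minimal sketch of the fix, based on the snippet above (ConnectionProvider is still the hypothetical class from this post):

using (var t = new TransactionScope())
{
   var c = ConnectionProvider.ProvideConnection();
   // A connection that was opened earlier, outside the current scope (e.g. re-used from
   // the provider's own pool), will not enlist automatically, so we enlist it explicitly.
   if (Transaction.Current != null)
      c.EnlistTransaction(Transaction.Current);

   //Use the connection to update a database
   //...
   t.Complete();
}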

It is also worth highlighting that the described problem won't occur if ConnectionProvider doesn't try to implement pooling on its own. In general we don't have to do it because .NET does it for us. The problem will also not occur if a using statement is used to close a connection returned by the provider.

09/09/2015

TransactionScope + Ninject + a small mistake = WTF


Sometimes one stupid mistake can cost a lot of time. A few days ago my application (AngularJS + ASP.NET Web API) started crashing because of the following error:

MSDTC on server 'XXX' is unavailable

It was strange. I wasn't aware of any distributed transactions in my application. To be honest, I was using TransactionScope but I was sure that there was no reason to promote a lightweight transaction into a distributed one. To make things stranger, the error wasn't reported every time. When I tried to update data for the first time everything was ok. However, the second attempt (and every next one) failed.

It took me some time to examine all the recent changes but finally I found the problem. It was quite tricky so I decided to write about it. Let's start with the fact that I use Ninject as a dependency injection container. Among other things, Ninject allows us to control the lifetime of objects (instances). Particularly, in the case of web applications, we can use:
  • InRequestScope method - it tells Ninject that one object of a particular type should be created for each individual request.
  • InSingletonScope method - it tells Ninject that one object of a particular type should be created for all requests.
For example:
kernel.Bind(x => x
   .FromAssembliesMatching("test.dll")
   .SelectAllClasses().InheritedFrom(typeof(IInterface))
   .BindAllInterfaces()
   .Configure(z => z.InSingletonScope()));
The problem was that I accidentally mixed InSingletonScope and InRequestScope. For example, let's assume that each request requires objects of two classes A and B. Objects of type A are within the request scope and objects of type B are within the singleton scope.

Both objects perform updates/inserts/deletes and are used inside TransactionScope. For the first request it is not a problem. Both objects are initialized within the same request and use the same database connection. It means that a lightweight transaction is used.

However, for the second (and every next) request an object of type B is re-used whereas a new object of type A is created. The object of type B was initialized in the previous request and it uses a different connection than the one used by the object of type A. It means that a distributed transaction will be used in this case.
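
One way to avoid the problem is to make the scopes of data-access objects consistent, for example a sketch like this (assuming the A and B classes from the example above and Ninject.Web.Common, which provides InRequestScope):

// Both data-access classes live for exactly one request, so within a request
// they share the same lifetime and the same database connection.
kernel.Bind<A>().ToSelf().InRequestScope();
kernel.Bind<B>().ToSelf().InRequestScope();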

To sum up:
  • DI containers give great power but with great power comes great responsibility.
  • Be careful when using objects of a different scope together. Especially when these objects require data access.
  • Be careful when using multiple connections inside TransactionScope. In the case of MSSQL 2005, in this situation a distributed transaction will always be used. In the case of MSSQL 2008 or newer it is possible to use more than one connection inside TransactionScope without automatic promotion, however only if these connections are not opened at the same time.
  • TransactionScope automatically escalating to MSDTC on some machines? is a great source of knowledge about TransactionScope and about the process of promoting lightweight transactions into distributed ones.

06/05/2015

Interview Questions for Programmers by MK #2


I prepared this question a long time ago in order to check knowledge of basic good practices.

Question #2
You have the following method which is responsible for updating a client in a database:
public void UpdateClient(
   int id,
   string name,
   string secondname,
   string surname,
   string salutation,
   int age,
   string sex,
   string country,
   string city,
   string province,
   string address,
   string postalCode,
   string phoneNumber,
   string faxNumber,
   string country2,
   string city2,
   string province2,
   string address2,
   string postalCode2,
   string phoneNumber2,
   string faxNumber2,
   ...)
{
   //...
}
Do you think that this code requires any refactoring? If yes, give your proposal. The database access technology doesn't matter in this question.

Answer #2
The basic problem with this method is that it has so many parameters. It is error prone and makes the method difficult to maintain and use. I suggest changing the signature of this method in the following way:
public void UpdateClient(Client client)
{
   //...
}
Where Client is a class that models clients. It can look like this:
public class Client
{
   public int Id { get; set; }
   public string Name { get; set; }
   public string Secondname { get; set; }
   public string Surname { get; set; }
   public string Salutation { get; set; }
   public int Age { get; set; }
   public string Sex { get; set; }
   public Address MainAddress{ get; set; }
   public Address AdditionalAddress { get; set; }
   /* More properties */
}
The Address class contains the details of an address (country, city, ...).

Comments #2
You may also write much more e.g.:
  • It may be good to introduce enums for properties like 'Sex' which can take values only from a strictly limited range.
  • The UpdateClient method should inform the caller about the result of the update operation, e.g. by returning a code.
However, the most important thing is to say that the UpdateClient method shouldn't have so many parameters. Personally, if I see code like the above I immediately want to reduce the number of parameters. This question seemed and still seems to be very easy, however not all candidates were able to answer it. Maybe it should be more precise. For example, I should have stressed that a candidate should focus ONLY on the available code. What do you think?

27/04/2015

Interview Questions for Programmers by MK #1


Do you know the series of posts titled Interview Question of the Week on the SQL Authority blog? If not, or if you don't know this blog at all, you have to catch up. I learned a lot from this series so I decided to start publishing something similar but focused more on .NET and programming.

This is the first post of the series, which I called Interview Questions for Programmers by MK, and in which I'm going to publish questions that I'd ask if I were a recruiter. Of course, they are somewhat based on my experience as a participant in many interviews.

Question #1
What is the meaning of the using statement in the code below? What would you do if the using keyword did not exist?
using(var file = File.OpenWrite(path))
{
   //...
}
Answer #1
In this example the using statement is used to properly release resources (to call the Dispose method) that are owned by an object of a class that implements the IDisposable interface. It is syntactic sugar and could be replaced with a try/finally block in the following way:
var file = File.OpenWrite(path);
try
{
   //...
}
finally
{
   if(file != null)
      file.Dispose();
}