26/12/2016

Migration to NuGet V3



Source: own resources, Authors: Agnieszka and Michał Komorowscy

Some time ago I wrote a post about My struggles with GitHub, Subtrees and NuGet and Joseph suggested another solution to my problem in the comments, i.e. to switch to NuGet V3. A few days ago I finally found time to give it a try. I started by reading this tutorial/post written by Oren Novotny. It's really a good source of knowledge, so I'll not describe the whole process again. However, I encountered 3 problems that weren't described there, and I'll briefly write about them.

Project.json vs packages.config

The new NuGet uses a project.json file instead of the old packages.config, so I started by adding this new file to all projects. Then I moved packages from the old files to the new ones. In the next step I reloaded all the projects just to be sure that VS would see the changes. After that I built the solution to check if it works, and it did. Finally, I made a commit and pushed my changes. And here comes a problem: a few minutes later I got an e-mail from my Azure (hosted) build controller saying that the build had failed.

Error: C:\a\_tasks\NuGetInstaller_333b11bd-d341-40d9-afcf-b32d5ce6f23b\0.2.24\node_modules\nuget-task-common\NuGet\3.3.0\NuGet.exe failed with return code: 1
Packages failed to install

The source of the problem was apparently a conflict between packages.config and project.json, so I just removed the former from the projects.
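For reference, a minimal project.json of the kind used in this migration could look as follows (the package name, version and target framework below are only examples, not a universal template):
{
  "dependencies": {
    "MvvmLight": "5.1.1"
  },
  "frameworks": {
    "net46": { }
  },
  "runtimes": {
    "win": { }
  }
}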

Naming problem

That's not all. The next problem looked like this:

Failed to resolve conflicts for .NETFramework,Version=v4.6 (win).
Unable to satisfy conflicting requests for 'MVVMLight': project/MVVMLight (>= 1.0.0) (via project/MVVMLight 1.0.0)
Unable to satisfy conflicting requests for 'CommonServiceLocator': CommonServiceLocator (>= 1.3.0) (via project/MVVMLight 1.0.0)
Unable to satisfy conflicting requests for 'MvvmLight': MvvmLight (>= 5.1.1) (via project/MVVMLight 1.0.0)...

This time the fix was also easy. NuGet V3 doesn't like it when projects in a solution have exactly the same names as packages! In my solution I had an MVVMLight project which is my playground for the MVVMLight package. I renamed it to MVVMLightTest.

Last but not least

After the migration to NuGet V3 I had to deal with one more problem, and again I didn't observe it locally but only when building on the Azure (hosted) build controller. In the build log I found the following error:

The OutputPath property is not set for project 'LanguageTrainer.csproj'. Please check to make sure that you have specified a valid combination of Configuration and Platform for this project. Configuration='Debug' Platform='x86'.

And it turned out that in some csproj files I had the following condition:

<Platform Condition=" '$(Platform)' == '' ">x86</Platform>

It says that if the platform is not specified for a given build, x86 should be used. At the same time these csproj files didn't contain a configuration for x86, including the problematic OutputPath property. To fix the problem I simply changed x86 to AnyCPU.
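For completeness, assuming the rest of the csproj stays untouched, the changed default is just:
<!-- Use AnyCPU when no platform is passed to MSBuild -->
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>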

23/12/2016

Życzenia Świąteczne / Merry Christmas 2016



Source: own resources, Authors: Agnieszka and Michał Komorowscy

Kolejny rok minął jak z bicza strzelił. Dużo się w nim działo, a w 2016 przynajmniej u mnie będzie się działo jeszcze więcej i z pewnością będzie ciekawie. Zanim to jednak nastąpi czekają nas Święta Bożego Narodzenia, czas odpoczynku, spotkań w rodzinnym gronie... i z tej okazji życzę Wam i Waszym bliskim wszystkiego dobrego.

Wszystkiego dobrego,
Michał Komorowski

*************************************************************************

Another year has passed in the blink of an eye. Much has happened, but even more is going to happen next year, at least in my case ;) But that is still the future and now Christmas is right around the corner. It is a time of rest and meetings with the family... and on this occasion I wish you and your relatives all the best.

Best wishes,
Michał Komorowski

19/12/2016

Murphy's law in practice


The photo comes from the MSI website

You must have heard about Murphy's law, which says that "Anything that can go wrong, will go wrong." We can also add "...at the least expected moment" or "...at the worst possible time". Some time ago I had quite a busy evening, i.e. I had to write some urgent e-mails etc. Everything was ok until my computer (MSI GE60 2OE, Windows 10) suddenly went crazy.

12/12/2016

Did you know that about HTTP?



Title: Chapel of the Emerald Buddha in Bangkok, Source: own resources, Authors: Agnieszka and Michał Komorowscy

Recently, when answering a question on stackoverflow.com, I learned an interesting thing about the HTTP protocol. Actually, now it seems obvious to me ;) What am I talking about? Well, initially I thought that if you send a GET HTTP request in order to download a specific URL, then in the response you get the entire page/file. In other words, I thought that it's not possible to read only a specific part of a page/file. However, it turned out that it's quite easy.
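Presumably the trick in question is the HTTP Range header; here is a minimal sketch using .NET's HttpClient (the URL is only a placeholder):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RangeRequestDemo
{
   static async Task Main()
   {
      using (var client = new HttpClient())
      {
         var request = new HttpRequestMessage(HttpMethod.Get, "http://example.com/file.bin");
         // Ask the server only for bytes 0-99; servers that support ranges answer with 206 Partial Content.
         request.Headers.Range = new RangeHeaderValue(0, 99);
         var response = await client.SendAsync(request);
         Console.WriteLine(response.StatusCode);
         var part = await response.Content.ReadAsByteArrayAsync();
         Console.WriteLine(part.Length);
      }
   }
}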

05/12/2016

You will love it



A screenshot comes from ShareLaTeX web site

Some time ago I wrote that I was amazed when I found out that NuGet supports C++. Today, I was amazed even more when I discovered the ShareLaTeX web site. This site is simply great and I don't know how I've lived without it for such a long time.

28/11/2016

NuGet in C++ rulez



Source: own resources, Authors: Agnieszka and Michał Komorowscy

I haven't been programming in C++ for a very long time and I didn't expect that I would do it professionally in the foreseeable future. However, life had different plans ;) Recently, I've joined an extremely interesting project in the area of computer vision. In a while I'll try to write something about it.

22/11/2016

How to validate dynamic UI with JQuery?



Source: own resources, Authors: Agnieszka and Michał Komorowscy

One of the most interesting tasks I worked on some time ago was a library responsible for generating a dynamic UI based on an XML description in an ASP.NET MVC application. The task was not trivial. The UI had to change based on the selections made by a user. I had to support many different types of controls, relations between them (e.g. if we select checkbox A then text box B should be disabled) and, of course, validations. In order to perform the client-side validations I used the jQuery Unobtrusive Validation library. I thought that it would work just like that, but it turned out that a dynamic UI may cause problems. Here is what I did.

16/11/2016

3 reasons why I don't use strict mocks



Source: own resources, Authors: Agnieszka and Michał Komorowscy

The majority, if not all, of mocking frameworks provide 2 types of mocks, i.e. strict & loose. The difference between them is that a strict mock will throw an exception if an unexpected (not configured/set up) method is called. I prefer to use loose mocks because with strict ones unit tests are fragile. Even a small change in the code can cause unit tests to start failing. Secondly, if you need to set up many methods, a test becomes less readable. Now, I can see one more reason.
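To illustrate the difference, here is a minimal sketch assuming the Moq library and a hypothetical IMailer interface (neither comes from the original post):
using Moq;

public interface IMailer
{
   void Send(string to);
   bool IsConfigured { get; }
}

public class MocksDemo
{
   public static void Run()
   {
      // Loose mock: unexpected calls return defaults (false, null, 0) instead of throwing.
      var loose = new Mock<IMailer>(MockBehavior.Loose);
      var ok = loose.Object.IsConfigured; // returns false, no exception

      // Strict mock: every call must be set up, otherwise a MockException is thrown.
      var strict = new Mock<IMailer>(MockBehavior.Strict);
      strict.Setup(m => m.Send("test@example.com"));
      strict.Object.Send("test@example.com"); // fine, it was set up
      // var broken = strict.Object.IsConfigured; // would throw MockException
   }
}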

31/10/2016

Roslyn - How to create a custom debuggable scripting language 2?



A screenshot comes from Visual Studio 2015

In the previous post I explained how to create a simple debuggable scripting language based on Roslyn, i.e. a compiler as a service. By debuggable I mean that it can be debugged in Visual Studio like any "normal" program, for example one written in C#.

27/10/2016

Roslyn - How to create a custom debuggable scripting language?



A screenshot comes from Visual Studio 2015

Some time ago I decided to play a little bit with Cakebuild. It's a build automation tool/system that allows you to write build scripts using a C# domain specific language. What's more, it is possible to debug these scripts in Visual Studio. It is interesting because Cake scripts are neither "normal" C# files nor are they added to projects (csproj). I was curious how it was achieved and this post is the result of my analysis. I'll tell you how to create a simple debuggable scripting language. By debuggable I mean that it'll be possible to debug scripts in our language in Visual Studio almost like any "normal" C# program. Cakebuild uses Roslyn, i.e. a compiler as a service from Microsoft, and we'll do the same.
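To give a feeling of what "a compiler as a service" means in practice, here is a minimal sketch assuming the Microsoft.CodeAnalysis.CSharp.Scripting package; it only evaluates a script and does not cover the debugging setup itself:
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

class ScriptingDemo
{
   static async Task Main()
   {
      // Compile and run a piece of C# given as a plain string.
      var result = await CSharpScript.EvaluateAsync<int>("1 + 2 * 3");
      Console.WriteLine(result); // 7
   }
}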

24/10/2016

.NET Developer Days 2016 - Grand finale



The time has come to summarise .NET Developer Days 2016. I think that each conference can be judged based on 3 main factors: organisation, presentations and networking, so I'll write a few words about each of these topics.

18/10/2016

Do not forget about GO



Source: own resources, Authors: Agnieszka and Michał Komorowscy

Almost 4 years ago, I wrote a short post in Polish about problems that may occur if we forget about the GO keyword in our scripts. I decided to write this post again, this time in English, because recently I helped to fix exactly the same problem again. As a reminder, the GO keyword instructs tools like SQL Server Management Studio, sqlcmd... to send a batch of T-SQL code to the server. Now, let's look at the following code that creates a stored procedure and tell me what is wrong here:
CREATE PROCEDURE dbo.pr_Fun
AS
BEGIN
    /*...*/
    RETURN
END

GRANT EXECUTE ON dbo.pr_Fun TO public
GO

21/09/2016

Report from the battlefield #6 - Auto-Property Initializers + a non-binary serialization



Source: own resources, Authors: Agnieszka and Michał Komorowscy

I can bet that you've already heard about and used Auto-Property Initializers and that you love them. At least I do ;) Here is a small example. In this case an Auto-Property Initializer is used to generate unique identifiers for instances of the Entity class. Trivial, isn't it?
public class Entity
{
   public string Guid { get; } = System.Guid.NewGuid().ToString();
   /*...*/
}
What is important, we have a guarantee that a given initializer will be executed only ONCE for an instance of a class. Otherwise it would make no sense! In other words, our expectation is that if we create a new instance of the Entity class, its identifier will not change. It is generally true, but there are some caveats.
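Here is a minimal sketch of one such caveat, assuming the Newtonsoft.Json package: a non-binary serializer creates a new instance during deserialization, so the initializer runs again and, because the property has no setter, the original value is not restored:
using System;
using Newtonsoft.Json;

public class Entity
{
   public string Guid { get; } = System.Guid.NewGuid().ToString();
}

public static class SerializationDemo
{
   public static void Run()
   {
      var original = new Entity();
      var json = JsonConvert.SerializeObject(original);       // the Guid is written to JSON
      var copy = JsonConvert.DeserializeObject<Entity>(json); // but it cannot be assigned back (no setter)
      Console.WriteLine(original.Guid == copy.Guid);          // False - the initializer ran again
   }
}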

16/09/2016

.NET Developer Days 2016 - Workshops



In my previous post about .NET Developer Days 2016 I wrote generally about the conference and about the presentations I'd like to see. This time I want to drop a few lines about the pre-conference workshops (sessions). They will take place just a day before the actual conference (GoldenFloor, Millenium Plaza – Warsaw, Al. Jerozolimskie 123 a) and you could choose from:
The links above will lead you to the description of each session. However, I have a surprise for you. I contacted the experts who will conduct the workshops and asked them a few questions. Here is the additional information that I got. You'll not find it anywhere else.

31/08/2016

AjaxExtensions.BeginForm doesn't work. Really?



Source: own resources, Authors: Agnieszka and Michał Komorowscy

The goal of using Ajax is to communicate with the server asynchronously without reloading the entire page. Specifically, AjaxExtensions.BeginForm can be used to update a selected part of a web page. It is relatively easy to use but can also be troublesome, especially when we try to apply it in an application which wasn't using Ajax earlier. I decided to write this short technical post because recently I came across the following issue a few times:

AjaxExtensions.BeginForm redirects a user to a new page instead of refreshing a fragment of a current one.

This problem has an easy explanation. Under the hood AjaxExtensions.BeginForm uses a JavaScript library called Microsoft jQuery Unobtrusive Ajax. The issue is that this library is not installed by default when we create a new project. It's easy to forget about it.

If you have the described problem:
  • Check if the packages.config file contains the Microsoft.jQuery.Unobtrusive.Ajax package.
  • Check if the jquery.unobtrusive-ajax.js file is referenced in HTML, e.g.: <script src="/scripts/jquery.unobtrusive-ajax.js"></script>
  • If you use bundles, check if jquery.unobtrusive-ajax.js was included in a bundle, e.g.:
    public static void RegisterBundles(BundleCollection bundles)
    {
       ...
       var js = new ScriptBundle("~/bundles/MyBundle").Include("~/Scripts/jquery.unobtrusive-ajax.js");
       ...
    }
  • Besides, check if a bundle with jquery.unobtrusive-ajax.js is rendered properly e.g.:
    @Scripts.Render("~/bundles/MyBundle")

23/08/2016

.NET Developer Days 2016 are coming



.NET Developer Days 2016 is the third edition of the biggest conference in Central and Eastern Europe dedicated to the .NET platform. I didn't participate in the previous editions but this time I will. Why? Well, I read a few quite good reviews of the former editions. Besides, recently a friend of mine told me that he was going to go there, which is also a good recommendation.

To make things funny, when I was about to buy tickets, the organizers of the conference asked me to write about it. So yes, it is a sponsored text, but I wouldn't write it if I didn't want to go there anyway. Let's start with a few facts about .NET Developer Days 2016:
  • What: 3 tracks with 24 presentations.
  • Where: 
    • Conference: EXPO XXI Exhibition Center – Warsaw, Prądzyńskiego 12/14
    • Workshops: GoldenFloor, Millenium Plaza – Warsaw, Al. Jerozolimskie 123 a
  • When: 19th-21st October 2016. The workshops will take place on October 19th, and the conference will start one day later. Organizers also plan a party at the end of day one. I think that it'll be a good occasion for networking.
  • Language: 100% English
I'm still thinking which presentations to choose but I have a few solid candidates. I remember Jon Skeet from the Dev Day conference. He gave a really good presentation, so he is my number one. This time he will start the conference and then will talk about Abusing C# and Immutability. I also saw a few presentations delivered by Tomasz Kopacz in the past. As far as I remember he was always a mine of information. His presentations were advanced and demanding but you could learn a lot from them.

I've also heard a lot of good things about Maciej Aniserowicz, so his presentation about CQRS is also on my list. I don't know the other speakers but there are many other promising topics to choose from. For example, I'd like to listen to Alex Mang, who will talk about containers, or Adam Granicz, who will give a presentation about functional programming, or... Actually, I already see some potential conflicts in my personal agenda, so as you can see the choice is not easy. I encourage you to see the full agenda on the conference site. If you want to buy a ticket, do it sooner rather than later because the price goes up every 2 months.

See you there!

19/08/2016

My struggles with GitHub, Subtrees and NuGet



Source: own resources, Authors: Agnieszka and Michał Komorowscy

Some time ago I decided to publish my projects on GitHub. This decision had very positive repercussions because it mobilized me to do more refactoring and to clean up my solutions. Additionally, I switched entirely to managing external references via NuGet. Earlier, I had kept some binaries in a dedicated directory on my computer. It took me some time but it was worth it.

I also had to solve the following problem. I have a solution called Common. It is a collection of libraries, utilities, algorithms, helpers etc. that are used in my other projects. Before the migration to GitHub, after a build, all Common binaries were copied to a well-known location, i.e. N:\bin. Thanks to that all other projects could reference them from this location. It works. However, if someone wants to download my projects from GitHub, he or she will need to create the mapped drive N manually. I didn't like it.

The next step was to switch from absolute references to binaries to relative references to projects. For example, let's consider a library MK.Utilities and a project LanguageTrainer that uses it. Initially LanguageTrainer was referencing:

N:\bin\MK.Utilities.dll

After migration this reference was changed to:

..\..\Common\MK.Utilities\MK.Utilities.csproj

Much better, isn't it? Still, it is not perfect. This relative path will work only if the folders with the Common and LanguageTrainer solutions are in the same place on the disk. Besides, in order to compile the LanguageTrainer solution, the Common solution must be built first. What's more, Common and LanguageTrainer are two separate repositories which have to be downloaded individually. My goal was to be able to download any repository/solution and then be able to compile it without any further steps.

I started reading about possible solutions and I found information about git submodules and git subtrees. There is so much about them on the Internet that I will not repeat others. For example, see this post. In the end I decided to use subtrees. Simplifying, a subtree is a copy of one repository inside another one. Returning to my earlier example, by using subtrees I got a copy of the Common repository/solution in the LanguageTrainer repository/solution. Then I could change the reference to MK.Utilities as follows:

..\Common\MK.Utilities\MK.Utilities.csproj

Besides, I needed to add the MK.Utilities project to the LanguageTrainer.sln solution so that all projects could be built at the same time. Finally, my LanguageTrainer repository/solution looks as follows. On the left side you can see GitHub and on the right side Visual Studio:


If needed, I can refresh the copy of the Common repository/solution inside LanguageTrainer at any time. Why did I use subtrees? Well, they work for me and are extremely easy to create via Source Tree ;) By the way, I like git but I hate all these complex commands. Source Tree solves this problem and allows me to use git via a friendly GUI. I strongly recommend it.

At the end I had to solve one more problem, this time with NuGet. Let's return to my example and assume that the LanguageTrainer solution is located here:

C:\LanguageTrainer

In that case, by default, Nuget packages will be located here:

C:\LanguageTrainer\packages

However, we also have a subtree:

C:\LanguageTrainer\Common

And the projects from the subtree expect that their packages will be here:

C:\LanguageTrainer\Common\packages

Of course they won't be there, so the Common projects will not compile. To overcome this problem I had to manually update the csproj files and replace ..\packages with $(SolutionDir)packages. For example:

..\packages\structuremap.3.1.6.186\lib\net40\StructureMap.dll

Was changed as follows:

$(SolutionDir)packages\structuremap.3.1.6.186\lib\net40\StructureMap.dll

I hope that this post will help you in your struggles with GitHub, Subtrees and NuGet.

06/08/2016

Why did I do my PhD?



Source: own resources, Authors: Agnieszka and Michał Komorowscy

This is my second post about the longest project I've ever participated in, i.e. about doing a PhD. I decided that at the beginning I'll write why I actually started my Ph.D. studies and what I think now about my motivations.

I remember quite well the moment in 2009 when I made my decision to get a Ph.D. It was driven mainly by a few things. Firstly, at that point I was a newly minted graduate of the Warsaw University of Technology and I had very good memories of my studies, of being a student... and I wanted to continue that. Secondly, I wanted to do something different from what I was doing professionally for money, i.e. typical applications for business.

Thirdly, I associated the Ph.D. title with some kind of prestige that would allow me to distinguish myself in the future. Fourthly, half a year earlier my wife and a few of my friends had also started Ph.D. studies. Don't get me wrong. I wasn't jealous, I didn't feel worse or anything. But taking into account what I've written earlier, it was an additional motivation for me.

What do I think about these motivations now? I'd say that they are neither good nor bad; they simply are. However, I think that I missed a few important things in 2009. Do you agree with me that my way of thinking was somehow romantic? Now I know that it was. I assumed that I would be working professionally for money and doing the Ph.D. to "do something different". I didn't think much about my future scientific career, doing habilitation... I also didn't think much about what I actually wanted to achieve during my studies. Though thanks to that I had an occasion to play with technologies like Azul Systems or Agilent N2X before focusing on historical debuggers :)

Briefly summarising, my Ph.D. studies were a little bit like a hobby. And like any hobby, on the one hand it gives you fun and satisfaction, but on the other hand it doesn't necessarily lead you anywhere and can be easily set aside or abandoned. Many times I had moments of doubt or wanted to say stop.

Would I make the same decision if I could go back in time? Definitely yes, but I'd consider it much more carefully. I'd think through the area of my research so that it would be more promising and more valuable on the market. Probably for financial reasons I'd split my time between the Ph.D. and a professional job. However, I'd try to find a job somehow related to my studies, where a Ph.D. could be potentially beneficial. Currently, in the vast majority of job offers (that I receive) it isn't. I'd also consider doing the Ph.D. abroad (outside of Poland), where funding is better, so that I'd be able to focus on science. Thanks to that my results would also be better.

You must be both romantic and pragmatic when doing a Ph.D.

24/07/2016

Report from the battlefield #5 - Logging can kill performance


Public Domain, https://commons.wikimedia.org/w/index.php?curid=48390
Source: own resources, Authors: Agnieszka and Michał Komorowscy

So far in the Report from the battlefield series I have written about my experiences as an expert for a recruitment company. This time I'll write about a bug that I found in production. It was all about performance. The problem was that in the new version of an application one operation slowed down about 6 times. Initially, I suspected that the amount of data had simply increased considerably or that there were some network problems. Fortunately, I easily reproduced the issue on my dev machine. Reproducing a problem is half the battle. Though performance problems are usually difficult to analyse, so I was ready for a long investigation.

I started stepping through the code with a debugger just to see what was going on. Everything seemed to be ok until... One of the final operations was to log to a file what had been retrieved from the database. What's important, the log level was set to Trace, so even a large amount of data shouldn't matter in production. Why? Because in production, precisely for performance reasons, the logger should be configured not to log everything to a file. In other words, it should ignore messages with the log level Trace or Debug. However, after I had pressed F10 (Step Over), I had to wait a few seconds till the logging ended. BINGO!

My first thought was that someone had configured the logger in the wrong way in production. A typical PEBKAC problem. To verify my hypothesis I changed the configuration of the logger and executed the problematic operation. Unfortunately, the problem occurred again. Another look at the code and I knew what was wrong. Do you already know?

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

The problem was that for a large amount of data the application required a few seconds just to create/prepare a message for the logger. To make things worse, this message was created regardless of whether it was later used by the logger or not. During development it may be acceptable, but not in production! There are 2 potential solutions to this problem (a concrete sketch follows the list below); the details depend on the logging framework:
  • The first approach is to simply check the logging level before creating/preparing a message e.g.:
    if(Logger.LogLevel == LogLevel.Trace) 
    {
        /* Prepare and log a message */
    }
    
  • The second approach is to use deferred execution for example lambdas e.g.:
    Logger.Trace(() => /* Prepare a message */).
    If a logger supports this syntax, a lambda will be executed if and only if it is required.
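Here is a minimal sketch of both approaches, assuming the NLog library; BuildHugeMessage stands for the hypothetical, expensive code that prepares the message:
using NLog;

public class OrdersRepository
{
   private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

   public void LogRetrievedData()
   {
      // Approach 1: guard the expensive work with a level check.
      if (Logger.IsTraceEnabled)
      {
         Logger.Trace(BuildHugeMessage());
      }

      // Approach 2: deferred execution - the lambda runs only if the Trace level is enabled.
      Logger.Trace(() => BuildHugeMessage());
   }

   private string BuildHugeMessage()
   {
      return "..."; // imagine serializing a large result set here
   }
}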

17/07/2016

The longest project


Source: own resources, Authors: Agnieszka and Michał Komorowscy

I haven't been blogging for 4 months and it's the longest break I've ever had. Why? Was I sick? Did I have no ideas what to write about? Did I have no time? Did I have too much work? Fortunately, none of that. The reason is completely different and probably surprising. I simply finished the longest project in my life.
  • The project that I started in 2009.
  • The project that for all these years was somewhere in my mind.
  • The project that I wanted to abandon over a dozen times.
  • The project that took hundreds or thousands of hours.
  • The project that allowed me to learn a lot.
  • The project that I would have done in a different way if I had had this chance.
  • The project of which I'm extremely proud.
  • The project after which I simply had to rest.
What could it be? The answer is a PhD in Computer Science. On 12 April 2016 I defended my doctoral dissertation, written under the supervision of Professor Janusz Sosnowski, entitled:

Methods of analysis of information systems based on logs of historical debuggers

Even now I remember how relieved and happy I felt then :)

In my work I focused on the problem of storing and analysing data collected by historical/reversible debuggers. I performed a detailed analysis of what could be and what should be improved when it comes to working with them. As a result, I proposed new models of representation of execution traces and I implemented tools that facilitate working with data recorded by historical debuggers. I also performed experiments showing the advantages of my ideas. It was a really, really huge job.

Now you may want to ask some questions:
  • Was it worth it?
  • Why did you do it?
  • Did you work professionally at the same time?
  • How did you split your time between PhD studies and work? Is it possible at all?
  • What did you actually gain?
  • How to start PhD studies?
  • How much can you earn at the university?
  • Will you continue your scientific career?
  • Why didn't you write about your PhD earlier?
  • And many, many more.
I plan a series of posts about doing a PhD in computer science. Many topics will be specific to Poland but many will be general. I want to do that for two reasons. Firstly, it'll be a form of therapy for me :) I simply want/need to write about something that was so important to me for such a long time. Secondly, I think that there are not many blogs/articles about doing a PhD, so it should simply be useful for others.

If you have any specific questions just let me know.

18/03/2016

Two things I learned about HTML and CSS


I've never worked a lot with CSS. However, from time to time I do something with it, for example in order to check out new possibilities. Recently, I read about CSS transformations and decided to give them a try. To begin with, I wanted to achieve a very simple effect, i.e. a red square with blue and green diagonal lines. It sounds simple and it is simple, but there are traps in this exercise. I decided to write this post because it took me a moment to figure out what was wrong. It was also difficult to find a solution on the Internet. Maybe because it is so obvious ;)

My idea was to use 3 div elements: one for the square and 2 for the diagonal lines. I also wanted to use transformations in order to rotate the divs so that they look like diagonals. My first attempt looks as follows. Do you know what is wrong? There are 2 main problems here.

Scroll down if you want to see a correct solution:
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.


I changed two things, one in HTML and one in CSS:
  • The first problem was that I used div as a self-closing tag. It is not allowed. Browsers treat <div /> as <div>. It is quite difficult to spot.
  • The second problem was in the greenLine style. It was not enough to rotate the green div by 45 degrees. Firstly, we need to translate it like this: translate(100px,-141px) rotate(45deg). It might be surprising because in the case of the blue div the rotation was enough. However, we have to remember that the green div is not located at the origin of the coordinate system but just below the blue div. The blue div looks like a thin line but its height is set to 141 pixels.

14/03/2016

Report from the battlefield #4 - Do not waste my and your time


The Report from the battlefield series is based on my experience as a reviewer. The idea is simple. In order to evaluate programming skills, a candidate is asked to write a simple project. To do so he/she needs to invest some amount of time (roughly speaking, a few hours). Taking this into account, I assume that he/she must be interested in finding a new job. Otherwise he/she wouldn't spend his/her private time writing a project which isn't exactly exciting. That's why I'm all the more surprised that some people don't care about the first impression.

Here are some examples showing what I'm talking about:
  • A connection string used by the application referred to some server that of course wasn't available to me.
  • The database used by the application didn't contain any sample data.
  • I had to manually create a database used by the application. There was a script but it didn't work without fixes.
  • The application crashed immediately when started.
  • ...
It's a waste of time from my perspective. It is true that all these problems can be fixed quickly, but they require additional effort from me. Believe me, it is extremely annoying. Instead of making an actual review, someone forces me to fix bugs. What's worst, these bugs could easily have been avoided with a little more effort.

Please remember, the first impression is important. It'll be appreciated if a reviewer can run your application just by pressing F5 in Visual Studio (or in another IDE). You can test it in a straightforward way. Before submitting a project for review, copy it to another machine and try to run it there. It should work without any additional actions.

Currently, if a project cannot be run without problems, I don't review it. However, I have a soft heart and I give a candidate one chance to fix them. Do you think that it's a good approach? I have my doubts because an employer probably wouldn't do so.

27/02/2016

Tips & Tricks: How to tell VS to modify variables in the runtime for us?


Today, I'd like to share with you a simple but useful trick. Imagine that you are debugging an application and you find a place with the following very simple code:
            
var flag = ReadConfiguration();
if (flag)
{
   //...
}
else
{
   //...
}
The problem is that the flag variable is set to false, but you need to check what would happen if it were set to true. Of course you can easily change the value of this variable in Visual Studio. But what would you do if this kind of code is executed dozens, hundreds... of times and every time the flag variable must be set to true? One solution is to modify the configuration, another might be to change the source code. However, all these things require an additional action. It would be much better to tell Visual Studio to do it for us. How? In order to achieve the desired effect we can utilize breakpoints and custom actions. I'll show how to do it in Visual Studio 2015.

Firstly, put a breakpoint on the line with the if statement.


Right click the breakpoint and select Actions... from the context menu. Then in the text box enter {flag = true}. You can even use IntelliSense here. At the end click the Close button.


And that's all. Now, if you run the application under debugger control, the flag variable will be set to true whenever the line with the breakpoint is executed. What's more, this trick also works with other types of variables and you can also execute methods in this way, e.g.:


At the end I want to say 2 things. Custom actions are usually used to write diagnostic messages to the Output window. This trick works because in order to write a message Visual Studio has to execute some code and this code can have side effects. Besides, you can also use this trick in older versions of Visual Studio. The only difference is that from the context menu you need to select the When Hit... option.

21/02/2016

My list of online editors


Online editors (testers, debuggers) are awesome if we want to quickly test some code. They are also very useful for checking our solution when we want to post an answer on Stack Overflow. Here is my collection of various online editors that I've encountered, though personally I use only some of them.

I'm publishing it because it can be helpful for others and because I'd like to have this list easily accessible on the Internet. Of course this list is not complete and there are many other editors. If you know something interesting, let me know and I will add it here.


Online Editor | Language / Technology | Share function | Collaborate function
yUML | UML | Yes |
draw.io | Diagrams | Yes (via Google docs) | Yes
moqups | UI mockups | Yes | Yes
ideone | C#, Java, Haskell, C++, Ada and many others | Yes |
SQL Fiddle | SQL (MySQL, Oracle, PostgreSQL, SQLite, MSSQL) | Yes |
regular expressions 101 | Regular expressions | Yes |
.NET Fiddle | C#, VB.NET, F# | Yes | Yes
C# Pad | C# | |
D3.js | D3.js JavaScript library | |
CodePen | HTML + CSS + JS | Yes |
jsfiddle | HTML + CSS + JS | Yes | Yes
JS Bin | HTML + CSS + JS | Yes | Soon
CSS Deck | HTML + CSS + JS | Yes | Yes
Liveweave | HTML + CSS + JS | Yes | Yes
Plunker | HTML + CSS + JS | Yes | Yes
cpp.sh | C++ | Yes |

By "share function" I mean the possibility to create a permanent link to our code. The "collaborate function" allows a group of developers to write code together.

16/02/2016

Interview Questions for Programmers by MK #7


Question #7
You have the following code that uses Entity Framework to retrieve data from the Northwind database. First it finds customers that are from London and then processes their orders. All data model classes were generated with the Code First from Database approach. Unfortunately, this code contains a bug that can lead to performance problems. Identify this problem and propose a fix.
using (var ctx = new NorthwindContext())
{
   var londoners = ctx.Customers.Where(e => e.City == "London");
   foreach (var londoner in londoners)
   {
      foreach (var o in londoner.Orders)
      {
         foreach (var d in o.OrderDetails)
         {
            //....
         }
      }
   }               
}
Answer #7
This code is a classical example of the N+1 selects problem, where too many queries are sent to a database. The first query will be sent to the database in order to find customers from London. Then, for each customer, another query will be sent to read their orders. Finally, for each order, another query will be sent to retrieve the details of that order. Instead, all data could be retrieved by sending only one query. To do so we need to tell EF that we want to read customers together with their orders and order details. It can be achieved with the Include method.
using (var ctx = new NorthwindContext())
{
   var londoners = ctx.Customers.Include("Orders").Include("Orders.Order_Details").Where(e => e.City == "London");
   ...
}
A quick test will show that originally 53 queries are sent to the database and after the fix only 1.
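As a side note, the same fix can also be written in a strongly typed way. This is only a sketch assuming EF6 and the System.Data.Entity namespace, which provides the lambda-based Include extension:
using System.Data.Entity;
using System.Linq;

using (var ctx = new NorthwindContext())
{
   var londoners = ctx.Customers
      .Include(c => c.Orders.Select(o => o.Order_Details))
      .Where(e => e.City == "London");
   /*...*/
}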

07/02/2016

Report from the battlefield #3 - IEnumerable vs IQueryable


Some time ago I was reviewing a data access layer that was based on Entity Framework. I found a piece of code which immediately attracted my attention. A simplified version is shown below.
public IList<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... }).ToList(); 
}
...
var numberOfProducts = GetAll().Count();
The GetAll method is pretty simple because it just reads products from a database. The result returned from this method is used to count the number of products in the database. Although it is simple, it contains a serious bug. The problem is that it uses the ToList method to return a list of products. It means that ALL products must be retrieved from the database in order to return them in the form of a list. In other words, there is no deferred execution here.

If we work with a local database and the number of products is small, it shouldn't be a problem. However, this kind of code might lead to performance problems that are difficult to analyse, for example if our application uses a remote database and/or there are thousands of products. The desired behaviour is that the products are counted by the database engine. So let's try to make a fix:
public IEnumerable<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... });
}
...
var numberOfProducts = GetAll().Count();
Now it looks much better, doesn't it? GetAll doesn't use ToList and returns the IEnumerable interface. Unfortunately, this solution is far from perfect. In comparison to the first version, the only difference is the moment when all products are retrieved from the database. This time it will happen when the Count method is executed. Why? Before I explain, let's see the correct solution:
public IQueryable<Product> GetAll()
{
   return ctx.Products.Select(p => new Product() { ... });
}
...
var numberOfProducts = GetAll().Count();
This time I used IQueryable instead of IEnumerable. This small change is crucial. It causes no products to be read from the database. Entity Framework "sees" that we only want to count the number of products and an appropriate query is sent to the database. In other words, LINQ to Entities is used.

The situation is completely different when we work with IEnumerable. In order to understand the difference we have to realise one thing. The Count method for IEnumerable is something different than the Count method for IQueryable. With IEnumerable we use LINQ to Objects, and LINQ to Objects operates on objects in memory; it cannot communicate with a database. That is why all products must be read from the database if we want to count them.

Now someone inquisitive could say that for virtual methods it shouldn't matter whether we have variables of type IEnumerable or of type IQueryable if these variables point to the same object. After all, C# is an object oriented language that supports polymorphism etc. Well, it is all true, but only for virtual methods, and Count is not a virtual method. It is an extension method, and extension methods don't support polymorphism.
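A small illustration of that last point, assuming ctx.Products is an IQueryable<Product> as in GetAll above: the static type of the variable decides which Count extension method the compiler picks.
IQueryable<Product> query = ctx.Products;  // the same object...
IEnumerable<Product> sequence = query;     // ...only the static type differs

var a = query.Count();    // binds to Queryable.Count - counting is translated to SQL (SELECT COUNT(*))
var b = sequence.Count(); // binds to Enumerable.Count - all products are fetched and counted in memory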

05/02/2016

Sandbox Database Manager


My colleague Tomasz Moska published a very nice tool that makes the management of development MSSQL sandbox databases very easy. It is called Sandbox Database Manager and you can download it here or from GitHub.

Why is it worth recommending? Try to imagine a situation like this. A tester found a bug in the application. In order to reproduce it you need a copy of his database from a system test environment. With Sandbox Database Manager you can make a copy of this database and restore it on a selected server with just a few clicks. Another click or two and you have a snapshot created. Thanks to that you are able to revert the database to its original state at any time. Now let's assume that this database contains hundreds of tables and you don't know all of them. To investigate the problem you want to run the application and see which tables (probably dozens of them) will be updated and how. Sandbox Database Manager also supports this scenario because it allows you to track data changes at the column level.

These are only a few features of Sandbox Database Manager. It can do much more, for example run the same query against many databases or compare data between two databases. I can guarantee that Sandbox Database Manager is a really, really helpful tool because I use it in my day to day work. I recommend it without any hesitation. What is best, you can use it completely for free!