TeamCity 9 project settings versioning in Git

One of the great new features of TeamCity 9 is the possibility of storing project settings in Git or Mercurial.

When you develop software it is essential to be able to reproduce builds successfully. To achieve that goal you first of all need to version the source code. But too often the build scripts are forgotten! Especially when the builds are configured with such a great tool as TeamCity.

So we want to keep the source code and the build server configuration close together, so that we are sure we can always rebuild a previous version of the software.

What we don't want is a mixture of source code and build configurations. To achieve that goal we can use Git's ability to create an orphan branch:

> git checkout --orphan teamcity/settings

Then we remove all content from the old working tree, normally your current source code. No worries, the files are kept in the other branches!

> git rm -rf .

We add a README explaining that this branch stores the build server settings, and we make a first commit:

> git add README.md
> git commit -m "Initial TeamCity build settings commit"

And finally we push that branch to the origin Git repository:

> git push origin teamcity/settings

Now on your TeamCity server you can follow the instructions in Storing Project Settings in Version Control to define that TeamCity must version all changes made to your project.

We do it on the topmost project so that everything gets stored in our Git repository.

To achieve that we define a new TeamCity VCS root pointing to our newly created orphan branch, teamcity/settings, and finally click the Apply button.

After a few seconds you will see in your Git repository a second commit, made by TeamCity, containing all the configuration files!

Nice new feature!


Git Diff Margin v3.0 released

Twenty-five days after v2.0, I am pleased to announce v3.0 of Git Diff Margin!

Git Diff Margin displays live Git changes of the currently edited file on the Visual Studio margin and scroll bar.

Thanks to the great work of Sam Harwell, Git Diff Margin v3.0 now supports Visual Studio 2010, 2012, 2013 and Visual Studio 14 "CTP".

Here are the release notes:

New features

  • Support for Visual Studio 2010, 2012 and Visual Studio 14 "CTP"
  • Show diff using the Visual Studio Diff window, except for Visual Studio 2010 which still uses an external diff tool
  • Possibility to define shortcuts for next/previous change navigation
  • Add options for highlighting untracked lines #29
  • Update icons

Improvements
  • Improve external diff configuration handling in .gitconfig #32
  • Improve "removed" glyph and editor diff positioning
  • Improve support of Dark, Light and Blue theme
  • Make sure the text editor is focused after a rollback
  • Prevent ScrollDiffMargin from affecting the scroll bar behavior
  • Play nice with other source control providers

Fixes
  • Fix Show Difference fails with DiffMerge for file names with spaces #38
  • Fix submodules issue #40


Git Diff Margin v2.0 released

I am pleased to announce that Git Diff Margin v2.0 is released!

Git Diff Margin displays live changes of the currently edited file on Visual Studio 2013 margin and scroll bar.

You can download it from the Visual Studio Gallery and get the source code on GitHub.

Here is a screenshot

And a 30-second video


Its features:

  • Quickly view all current file changes on
    • Left margin
    • Scroll Bars in map and bar mode with and without source overview
      • blue rectangle for modifications
      • green rectangles for new lines
      • red triangles for deletions
      • all colors configurable through Visual Studio Fonts and Colors options
  • Undo the change
  • Copy the old code into the clipboard
  • Copy a part of the old code by selecting it in the popup
  • Show the diff in configured Git external diff tool
  • Navigate to previous/next change on the file
  • Support Visual Studio 2013 Dark, Light and Blue Theme
  • Support zoom

Using TeamCity integrated dotCover coverage files with NDepend


For a long time I have wanted to integrate NDepend on our build server, so this week I invested some time here and there to achieve that goal. I had already done it a long time ago; I even wrote the documentation, which you can read on the NDepend website.

This time I wanted to go one step further.

We use a first build which builds each feature branch we are developing. One of the responsibilities of that build is to run unit tests, integration tests and specifications, and to gather code coverage. To achieve that we use the TeamCity integrated dotCover in each build step that runs our tests. This collects artifacts which aren't directly shown on the Artifacts tab:

Clicking show reveals the file we are interested in, dotCover.snapshot:

Something to note: to be able to use code coverage, NDepend needs the pdb files, which is why we have another artifact containing them. And finally the third artifact is our software, with the exe and dlls.

Now that we have a build which generates the coverage file we want to pass to NDepend, let's create another TeamCity build which defines a Snapshot Dependency and an Artifact Dependency on the previous build:

We extract the exe to an NDepend folder, and all dlls out of the archive's Libs folder to the same NDepend folder.
We do the same for the pdb files so that NDepend can use the code coverage data.
Finally we extract dotCover.snapshot to a dotCover folder.

The issue we then had was that the dotCover.snapshot file is not in the format NDepend expects.

So as the first build step of our NDepend build we need to convert the dotCover.snapshot file. This is done using a Command Line build step and the dotCover integrated in TeamCity, using the report command with ReportType set to NDependXML:
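As a sketch, the command line behind that build step looks roughly like this (the file locations are assumptions based on my setup; %teamcity.dotCover.home% is, to my knowledge, the parameter TeamCity exposes for its bundled dotCover):

```
%teamcity.dotCover.home%\dotCover.exe report /Source=dotCover\dotCover.snapshot /Output=dotCover\dotCoverNDepend.xml /ReportType=NDependXML
```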

So after that first build step we have a new converted file, dotCoverNDepend.xml, which can be consumed by NDepend.

Then in the second build step we use dotCoverNDepend.xml with the new NDepend 5.1 /CoverageFiles command parameter:

Here is the full command:

C:\NDepend\NDepend.Console.exe "\skyeEditor.ndproj" /CoverageFiles "\dotCover\dotCoverNDepend.xml" /InDirs "\NDepend" "C:\Windows\Microsoft.NET\Framework\v4.0.30319" "C:\Windows\Microsoft.NET\Framework\v4.0.30319\WPF" /OutDir "C:\NDependOutput"

This creates the NDepend report, which we archive as an artifact on the General Settings page of the build.

Then you need to define that you want to see the NDepend report as a TeamCity Report Tab, which you do by navigating to Administration > Report Tabs, clicking Create new report tab and specifying:

Finally you will have the following NDepend report, with code coverage, shown for your builds.

One last thing I struggled with: the NDepend builds were not started. I thought it was enough to configure the Dependencies, but you also need to define a Build Trigger with Trigger on changes in snapshot dependencies ticked:

Thanks to Yegor for the discussion, which, as always, greatly helped! Also to Ruslan, who helped through the post on the JetBrains forum. And finally thanks to Patrick, who added /CoverageFiles to NDepend really fast, so do not hesitate to give feedback using the new NDepend user voice.


Optimizing release build process using JetBrains TeamCity and Atlassian Stash


Automating and optimizing the processes I use every day to work is important, so that I get more productive and spend less time on things a computer is better at.

Previously I had 3 builds defined in TeamCity: one for all feature branches, one for release and one for patch. For the release and patch builds I needed to go to TeamCity and define two Build Parameters: the branch name and the release number.

My goal was to avoid having to go to TeamCity and set those Build Parameters every time we have a release.

I wanted one TeamCity build which would:

  1. Determine automatically the version number
  2. Determine the branch to use to build that release

The second point was easy! You just need to follow the same principles defined for feature branches. I defined the following branch naming convention: any release should have a branch name like release/skye-editor-2.26.0 for release 2.26.0 of Skye Editor. Then I defined the branch specification in the VCS Root of my TeamCity build:
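Based on that naming convention, the branch specification would look something like the following (a sketch; the exact refs/heads form depends on how the VCS root is configured):

```
+:refs/heads/release/skye-editor-(*)
```

The part matched by (*) becomes the logical branch name that TeamCity displays and exposes to the build.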


and the same for the Build Triggers / VCS Trigger / Branch filter.

Now the first point is a bit more complex!

As we want to determine the version number automatically, we quickly realize that the release number is already part of the branch name itself, e.g. release/skye-editor-2.26.0.

So why not use it? Yeah great idea but how?

The first idea that came up was to pass that value as a parameter to the build script and deal with splitting the branch name into the release number inside the build script, as Yegor confirmed that TeamCity currently has no way to parse values. I didn't really like the idea of passing that parameter! So I continued to think about alternatives and finally asked Yegor:

I had another idea: in TeamCity, when the active builds are displayed for a feature branch build, only the second part is displayed, e.g. if I specify SKYE-* as the branch specification and I have a branch named SKYE-1077-blabla then it will show 1077-blabla. Is there a build parameter which maps to this 1077-blabla?

And the answer was:

If you have something like refs/heads/SKYE-(*), then only the part in the brackets is regarded as the logical branch name. The logical branch name is available as

That's it! Thanks Yegor. We have a way to get our version without having to change our build script and without writing any code!
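To illustrate the mechanism, here is a small Python sketch (purely illustrative; TeamCity does this matching itself) of how the logical branch name falls out of the branch specification:

```python
import re

# The part of the ref matched by (*) in a TeamCity branch specification
# becomes the "logical" branch name; this mimics that extraction.
def logical_branch(ref, spec=r"refs/heads/release/skye-editor-(.*)$"):
    match = re.match(spec, ref)
    return match.group(1) if match else None

print(logical_branch("refs/heads/release/skye-editor-2.26.0"))  # 2.26.0
```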

So I just used the logical branch name as my configuration parameter CurrentRelease, which already existed and which I previously had to update manually:
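Concretely, the existing CurrentRelease configuration parameter can reference the logical branch name instead of a hand-typed value; assuming the TeamCity parameter name teamcity.build.branch (which, as far as I know, holds the logical branch name), it becomes:

```
CurrentRelease = %teamcity.build.branch%
```

So a build of refs/heads/release/skye-editor-2.26.0 resolves CurrentRelease to 2.26.0.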

This replaces the previous two manual configurations.

So now I simply create a Git release branch, release/skye-editor-2.26.0, then use Atlassian Stash pull requests to review and merge my feature branches into that release branch, which is automatically built by TeamCity.

This is a great improvement and shortens my release checklist. All good!


Functional Programming Principles in Scala using JetBrains IntelliJ IDEA

This week I started the Coursera course Functional Programming Principles in Scala, with Martin Odersky as instructor.

One of my first steps was to get JetBrains IntelliJ IDEA working with the Scala programming language so that I can work on the course assignments using my preferred IDE.

I have seen that The Guardian published a blog post, Functional Programming Principles in Scala: Setting up IntelliJ, but it focuses on Unix-like operating systems (OS X and Linux). As I work on Windows, I of course installed everything on my machine running Windows 8.1! I also use a different set of tools, so here is the way I did it.

First of all, I am assuming that you have JetBrains IntelliJ IDEA 12 and a JDK installed.

From IntelliJ you need to install the Scala plugin: go to File / Settings, then on the dialog search for Plugins, click the Browse Repositories button, search for Scala and select Scala v0.19.299 from JetBrains.

Then you also need to install the SBT plugin, also from JetBrains, which is still in development and gets nightly builds.

Add the following URL to the list of custom plugin repositories in Settings / Plugins / Browse Repositories / Manage Repositories:

After that, you may install the plugin via Settings | Plugins | Browse Repositories. IDEA will check for the latest plugin builds automatically.

Now extract the assignment zip that the Coursera course provides; e.g. for the example assignment, unpack the archive to a folder, then in IntelliJ use File / Import Project. IntelliJ already recognizes the folder as a project:

Click Next

Tick Use auto-import and click Finish. IntelliJ will then import your project:

Now you need to edit your project structure using File / Project Structure; choose Modules, then progfun-example.

On the dialog you will see some folders in red under Source Folders and Test Source Folders; just click on the X to delete all the red ones to get to the following state, then finally click OK.

Now navigate to the test file ListsSuite and press ALT+SHIFT+F10 to start the tests.

And if you implemented the assignment you should see these results:

There are still two tests to fix in the first assignment, which is why they are red!

But as you can see now you have a full Scala development environment based on the great JetBrains IntelliJ IDEA IDE! Now you can code with pleasure!

I really enjoyed the first week's assignments and implemented the first two as coding katas! I even involved France-Anne for the second one; it is the first time she has coded something. Very nice!


Uninstalling a program which doesn't want to on Windows 8.1

I updated to Windows 8.1 RTM a week ago, and since then AMD Radeon™ RAMDisk was not running; even worse, I could not uninstall it because the uninstaller claimed it was the wrong version of the operating system!
I was using it on Windows 8, where it worked great, but after the update it didn't anymore.

The solution I found is to use Microsoft Fix It, "Fix problems with programs that can't be installed or uninstalled". I followed the wizard, indicated that I had an uninstall issue, then chose which software was in trouble, and finally could uninstall it!

Great tool! Thanks Microsoft Fix it!

Git and GitHub Training

Today I had the pleasure of attending the "Git and GitHub Training" held by Tim Berglund, a GitHubber, so called because he works at GitHub.

In the past I very much enjoyed the video Tim and Matthew McCullough, another GitHubber, made for O'Reilly.

So it was really nice to meet Tim in one of his courses.

What are the class objectives?

  • Understand how Git works and how to apply that to day to day development.
  • Learn how GitHub makes distributed collaboration both effective and enjoyable.
  • Practice the use of Pull Requests to make contributions to any project.
  • Learn the basic 10 commands that will appear in your every-day use of Git.
  • Know how to “back out” mistakes using Git’s incredible history and ability to revert almost any change.
  • Leverage the features of GitHub for easier collaboration with colleagues.
  • Discover how the offline capabilities of Git work.

And here is the course outline:
  • Introductions
  • Git and your initial setup
  • Git configuration and its inheritance
  • SSH Authentication and your first repository
  • Understanding and thinking in Git's three stages
  • Adding, committing, and diff-ing code changes
  • The Similarity Index; Moving, Renaming, and Removing files
  • Reviewing Version History in Git
  • Strategies for Efficiency (quick workflows, GitIgnores, etc.)
  • Managing and using Git Remotes
  • GitHub
  • Forking Repos
  • Pull Requests
  • Branching, Tagging, and Stashing
  • Merging, Rebasing, and managing conflicts
  • Undoing your work with Git

The course took place at the Canoo headquarters in Basel; they have really nice offices and welcomed us warmly!

We went through all the topics described, and even a bit more, as most of the people in the course already had some experience with Git.

Did I like the course, the format and the way it was presented?

Definitely yes! I enjoyed it and would really recommend it to people who are starting with Git. I especially liked the way Tim presented the inner workings of Git, which I think is important to get right, especially when you come from another VCS like Subversion. Also that the log is a graph and not a list…

Did I learn a lot?

Not really, but that was expected. I spent almost two years working with git svn at Innoveo Solutions and with Git for my personal projects. Now I am also using Git without the svn part at work, and I continue to push so that the whole company migrates to Git. So I had already gained the experience which the course would have given me much quicker; and I guess my colleagues Cédric, Carlos and Christian gained in one day knowledge which took me much more time to grasp. So the course is a great starter; as with quite some things, you then need to practice to really get it!

For people with a good understanding of Git I would rather recommend the Advanced Git & GitHub course.

During the course we had some open discussions on various topics; here are some I would like to put some emphasis on.

My personal experience brought me from thinking that I want the tools integrated into my IDE to, after getting more knowledge about Git, using Git from the command line. This makes me much more efficient as a developer and lets me automate some things which I couldn't do with the IDE. For sure I agree with Tim that if I had to work with cmd it would be rough, but today there is PowerShell, and combined with ConEmu and posh-git it is very effective, even on Windows, which is still seen as a clicking environment!

I learned about

git config --global credential.helper cache

But this wasn’t working on my Windows machine, so I went back to

This was also new to me

git push --delete origin mybranch

because I am used to doing it with the following, which is certainly less expressive of what it does:

git push origin :mybranch

Finally things I heard during the course which I want to investigate further are

I also had interesting discussions with some of the Canoo people, especially about Scratch, one way to bring kids to programming, and about the Raspberry Pi.

So that was a great day! And I hope to have more like it in the future.


Git Diff Margin, display Git Diff on the margin of the current file in Visual Studio 2012

I am happy to announce that today I finally released my first Visual Studio 2012 extension, called Git Diff Margin.

Git Diff Margin

A Visual Studio 2012 extension to display Git Diff on the margin of the current file.

  • Quickly view all current file changes on the left margin: blue rectangle for modifications, green rectangle for new lines and grey triangle for deletions
  • Navigate to previous/next change on the file
  • Undo the change
  • Show the diff in external tool
  • Copy the old code into the clipboard
  • Copy a part of the old code by selecting it in the popup
  • Define the colors of the Addition, Modification and Deletion diff margin through Visual Studio Fonts and Colors options
  • Support Visual Studio 2012 Dark, Light and Blue Theme
  • Support the zoom
  • Perfect companion of Visual Studio Tools for Git


Grab it from inside Visual Studio's Extension Manager, or via the Extension Gallery link.



Get the code


Thanks to Sam Harwell @sharwell for all the improvements!

Optimizing Skye Editor using JetBrains dotTrace

This post is a transcript of an internal post I wrote on the Innoveo Solutions internal blog. Thanks to Innoveo for letting me share it here!

Skye Editor is our meta model editor, written in C# 4 and WPF, using the Model-View-ViewModel design pattern and MVVM Light.

This post shows the usage of JetBrains dotTrace to optimize Skye Editor, and the importance of profiling your code. Here it is.

For release 2.20 of our Skye Editor product we had already done some optimization, like "FindProductValue of ModelProduct to use a dictionary".

Starting with release 2.21, the goal was to go one step further with "Optimize Loading/Deleting of definition and update to MVVM Light 4 RTM".

The results are quite awesome!

Here I am comparing the last version of Skye Editor which we shipped, 2.18, to the version currently in development for the next release, 2.21.

The performance measurement scenario is as follows:

  • Starting the application
  • Loading a big definition, a 2743 KB zip, 19928 KB of XML
  • Deleting a brick which has lots of sub-bricks and attributes, value ranges, values...

I used the profiler dotTrace from JetBrains to measure the performance improvement.

Here is a first result for the method ActualizeFromNewArchive, which is used when we load, import or activate a definition. This method is responsible for building all the View Models used in the editor, which we use to display the tree of root and bricks, the attributes, value ranges and values, but also the backend info and finally the texts. So on a big definition there is a lot to create, especially for the texts.



So we went from 9083 ms to 944 ms, which is around a 9.6x improvement, as we can see in the following picture!

That's quite impressive. But where does it come from? Let's dig deeper into the execution tree.



The first improvement is due to an improvement made in MVVM Light 4 RTM, a library we have used from the beginning, which lets us decouple our View Models and Views using messaging mechanisms, among other features. I helped its author, Laurent Bugnion, to test and improve the toolkit; he even mentions us on MVVM Light 4 RTM.

We went from 2219 ms to 35 ms, but across the whole scenario (all usages of the Register method) we went from 3072 ms to 130 ms, as we can see here:

The improvement there is that CleanupList is no longer run at that moment but only when the application is idle. Clever. And what is really cool is that I mentioned the performance issue and Laurent fixed it in the next release. Thanks Laurent!

But this is not all, because so far we have only won 3072 ms, which doesn't bring us from 9083 ms to 944 ms.

The other big improvement is the optimization of the way we find values, which has radically changed.



From 5103 ms to 13 ms!


        // 2.18
        public ProductValue FindProductValue(string uuid)
        {
            return Values.AsBindingQueryable().FirstOrDefault(pv => pv.UUID == uuid);
        }

        // 2.21
        public ProductValue FindProductValue(string uuid)
        {
            return Values.FindBindingByUuid(uuid);
        }

Let's look in more detail:


        private readonly List<TBinding> _bindingList;

        public IQueryable<TBinding> AsBindingQueryable()
        {
            return _bindingList.AsQueryable();
        }

        public TBinding FindBindingByUuid(string uuid)
        {
            Tuple<TBinding, TModel> value;
            _modelDictionary.Value.TryGetValue(uuid, out value);
            return value != null ? value.Item1 : default(TBinding);
        }

The huge difference between those two methods is that 2.18 uses a list and LINQ to find the first value matching the uuid we are searching for, while 2.21 uses a dictionary which indexes all values by uuid.
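The effect is easy to reproduce in any language; here is a Python sketch (names are illustrative, not the actual C# code) of the two lookup strategies:

```python
# A list of pseudo "values", each identified by a uuid.
values = [{"uuid": "id-%d" % i, "name": "value %d" % i} for i in range(10000)]

def find_linear(uuid):
    # 2.18 style: scan the list until the first match, O(n) per lookup.
    return next((v for v in values if v["uuid"] == uuid), None)

# 2.21 style: build a dictionary keyed by uuid once...
by_uuid = {v["uuid"]: v for v in values}

def find_indexed(uuid):
    # ...then each lookup is a single O(1) hash access.
    return by_uuid.get(uuid)

assert find_linear("id-9999") is find_indexed("id-9999")
```

The one-time cost of building the dictionary is quickly amortized when the same collection is queried thousands of times, which is exactly the load/import scenario measured above.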

Another improvement in 2.21 was to go from the following version of the method to the one shown previously:

        public TBinding FindBindingByUuid(string uuid)
        {
            return _modelDictionary.Value.ContainsKey(uuid) ?
                   _modelDictionary.Value[uuid].Item1 : default(TBinding);
        }

This version makes two accesses to the dictionary, while the other makes only one, which also improved things quite a bit.
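The same pitfall exists with most dictionary APIs; a Python sketch (illustrative only) of the difference:

```python
d = {"abc": ("binding-abc", "model-abc")}

def find_two_lookups(key):
    # ContainsKey-then-indexer style: the key is hashed and located twice.
    return d[key][0] if key in d else None

def find_one_lookup(key):
    # TryGetValue style: a single hash lookup.
    value = d.get(key)
    return value[0] if value is not None else None

assert find_two_lookups("abc") == find_one_lookup("abc") == "binding-abc"
```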

Another improvement is that we removed the usage of a ThreadSafeObservableCollection, which was dispatching all operations to the UI thread. Basically you could operate on the collection from a background thread while it was bound to the UI, which normally you cannot do due to thread affinity, except if you dispatch, which of course has a cost.

So that's it for the improvements when we load/import/activate a definition!

Now about deleting.



It would be nice to have this gain as-is, but in fact we had to refactor the operation so that one part is executed on the UI thread and the other part on a background thread. So basically what touches the View Model is executed on the UI thread, and what touches the Model on the background thread.

So we also have to count this:



So we go from 30720 ms to 21666 ms, which is again a good improvement.

This can be improved a lot more, because currently we have to traverse the whole tree and count all the relations to the texts we want to delete, which accounts for 19468 ms.
With a cache of those relations it will be much faster. But that's for next time!

I hope you will enjoy the time savings from all those optimizations in Skye Editor!

About Laurent

Laurent Kempé

Laurent Kempé is the editor, founder, and primary contributor of Tech Head Brothers, a French portal about Microsoft .NET technologies.

He has been employed by Innoveo Solutions since 10/2007 as a Senior Solution Architect, certified Scrum Master and Founding Member.

He is the founder, owner and Managing Partner of Jobping, which provides a unique and efficient platform for connecting Microsoft-skilled job seekers with employers using Microsoft technologies.

Laurent was awarded Most Valuable Professional (MVP) by Microsoft from April 2002 to April 2012.

JetBrains Academy Member
Certified ScrumMaster

Silverlight and file system loading

I am currently working with Silverlight and Silverlight Streaming, and had an issue over the weekend that I am sure I will not be the last one to have, so:

Don't try to load a Silverlight application from the file system (by double-clicking an html file); prefer to have a little web application, or even use "Starting ASP.NET Development Server from a right click in Explorer", to test your application.

The security context used in the two cases is different, and that makes the difference.

Thanks to Mathieu for the support in the development, and to Jon Galloway for the support in troubleshooting the issue.

