Making Core internal libraries accessible and maintainable – Part 2

In part 1, we looked at an example of a CoreLib_C being used in Application_A.  They were linked with relative-path project references.

This created a few critical points of failure that become inevitable in any evolving codebase.  So what are the potential solutions to this simple disaster?

The first approach I saw (note: I had no part in voting for and/or implementing it) was to create custom build tasks so that CoreLib_C could be built by MSBuild and its drop point fileshared out.  Then an internal tool would “quick-map” a common drive letter to that build version’s UNC.

It may sound tempting at first, but dare not be fooled by this trickery!  It is el diablo in disguise.  First, by having a custom MSBuild task create the fileshare from the drop point, you are 100% coupled to the build server itself.  The drop point must live on the build server, because the filesharing task executes on the host OS rather than remotely on a separate server.  This proved to fail miserably the day our VMWare cluster crashed (that build server was hosted on that cluster).

The trailing effects of this: updating references in Application_A to use the fileshare.  Now you are also coupling the developer machine, which must have the internal tool installed and be mapped to the correct distribution of CoreLib_C (devs often point to the wrong version and get tons of compile errors or magic bugs).  As if that weren’t enough, this method also cripples the build system.  In order to honor the developers’ project references to a mapped drive, the build scripts must also dynamically map the same drive letter to whatever “version” fileshare the build requires.

This means the build script needs the drive letter defined as a variable, and only one build service can execute on a single server at any time (otherwise the drive-letter mappings would compete).  It is a mess.  It breaks inevitably.

So what is the second approach?

The much cleaner approach is to utilize source control systems to their potential.  When we think of a 3rd party library that we purchase and use in an application, how do we retain it?  It usually gets added to a “3rd Party References” folder within the solution’s directory structure, and all of that is added into source control.  Why should this be any different just because the library is internally developed?  It follows the same principal concepts as a 3rd party package.

The Plan

  1. CoreLib_C source control changes
  2. MSBuild task / workflow activity requirements
  3. CoreLib_C workflow changes
  4. Pre-Build event for Application_A

Source Control Changes

The highlighted “Deploy” directory is added to source control at the root solution level.  “Deploy” is a repository folder that the drop folder copies its contents to, which are then committed to source control.  The end result: each time CoreLib_C is built by the build server, it compiles and checks its output into the Deploy folder.  This is essentially the same structure NuGet uses for adding packages to Visual Studio projects.
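As a sketch of the resulting layout (folder and file names hypothetical, for illustration only):

```
$/Team Project 2/CoreLib C/Main/
├── Code/
│   └── ... (CoreLib_C source)
└── Deploy/
    ├── CoreLib.Domain.Entities.dll
    └── ... (remaining build output)
```

Application_A then references the binaries under Deploy rather than the CoreLib_C project files.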




MSBuild task / Workflow activity

If you are using TFS 2010, a custom workflow activity is ideal.  Regardless of build system, whatever you use for continuous integration should be extensible enough to support custom build steps.  After the build finishes, the binary output is copied into the Deploy folder and checked in.  For TFS build systems, the TFS API is used, and a checkin comment of ***NO_CI*** is required; this prevents the automatic checkin in the build step from triggering another CI build in a loop.
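As a rough sketch of what such a step does (the helper names, paths, and the tf.exe invocation are illustrative assumptions, not the actual TFS activity):

```python
# Sketch of a post-build "copy drop to Deploy and check in" step.
# Assumes a Windows build agent with tf.exe on PATH; all names are hypothetical.
import shutil
from pathlib import Path

# The ***NO_CI*** comment suppresses a circular CI trigger on the auto-checkin.
CHECKIN_COMMENT = "***NO_CI***"

def copy_drop_to_deploy(drop_dir: str, deploy_dir: str) -> list:
    """Copy the build's binary output into the Deploy folder; return copied files."""
    copied = []
    src, dst = Path(drop_dir), Path(deploy_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for item in src.rglob("*"):
        if item.is_file():
            target = dst / item.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
            copied.append(str(target))
    return copied

def build_checkin_command(deploy_dir: str) -> list:
    """Assemble the tf.exe checkin command (not executed in this sketch)."""
    return ["tf", "checkin", deploy_dir, "/recursive",
            f"/comment:{CHECKIN_COMMENT}", "/noprompt"]
```

The two pieces mirror the two halves of the step: a plain recursive copy from the drop folder, followed by a checkin whose comment carries the ***NO_CI*** marker.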

CoreLib_C Workflow changes

The build script needs to incorporate the newly created build step or activity, and it can be tricky to test.  I recommend starting with a small code base that compiles in a few seconds.

Prebuild event for Application_A

This is the final piece of the puzzle.  Once the library is compiled and checking its updated binaries back into source control, you are ready to start “consuming” it in Application_A.  A pre-build event can be used if you want it automated.  Or you can manually pull it, update your working folder, and commit the changes to the “External” folder in a changeset.  Either is acceptable, depending on the cycle your team prefers.
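For the automated option, a pre-build event on Application_A could be as small as a single get of the Deploy folder (sketch only; the server path is hypothetical and tf.exe is assumed to be on PATH):

```
rem Pre-build event: pull the latest CoreLib_C binaries before compiling
tf get "$/Team Project 2/CoreLib C/Main/Deploy" /recursive /noprompt
```

The manual route is the same get, just run by a developer and committed to Application_A as an ordinary changeset.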


Disclaimer: NuGet was not around when I started this article (I was slow posting this one…).  Since its release, I fully support setting up an internal NuGet server and packaging enterprise/shared libraries there for other developers to pull down and install as dependencies.


Making Core internal libraries accessible and maintainable – Part 1

There are always design challenges around long term maintenance of code from start to finish.  It is almost always overlooked and quickly discarded as something “not worth the time” on an already rushed development timeframe for a “business critical bug or feature” that needs to get done.

We all know the following (we = programmers):

  1. Everything, no matter what, will always somehow be critical and necessary and top priority to business groups.
  2. Procrastination of upfront design leads to things similar to the fall of the Roman Empire.

There are infinite issues that can arise from poorly planned and often overlooked items such as source control structure, solution/project structure, and file naming.  Downstream, things can get hectic throughout the ALM cycle on the way to production.

To illustrate, here’s a random issue that 99% of the time would never be planned for:

Application A:

Code and Build definitions reside in Team Project 1
Repository Path: $/Team Project 1/Application A/Main/

CoreLib C:

Code and Build definitions reside in Team Project 2
Repository Path: $/Team Project 2/CoreLib C/Main/

The more I delve into the world of source control systems and workspaces, the more I discover that it is an avoided topic amongst most developers.  “Ignorance is bliss, and if it works magically I don’t care” is the typical attitude.  Let’s assume a developer adds a project reference into Application A from a csproj in CoreLib C via the local file system paths created by a Get from source control.

The workspace local path essentially doesn’t matter (in most cases it shouldn’t).  However, it can matter if it’s treated as magic.  The project reference ultimately becomes a relative-path reference to the other csproj.  If my root $ were mapped to D:\Dev, it would look like this:

D:\Dev\Team Project 1\Application A\Main\Code\MyApplication\MyApplication.Business.Entities\MyApplication.Business.Entities.csproj (130 characters)
D:\Dev\Team Project 2\CoreLib C\Main\Code\CoreLib.Domain.Entities\CoreLib.Domain.Entities.csproj (96 characters)

The reference inside MyApplication.Business.Entities.csproj would relative-path to:

../../../../../../Team Project 2/CoreLib C/Main/Code/CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj (107 characters)

The path lengths don’t seem that long to the developer.  However, the build agent hates them.  When MSBuild (yes, even in 2010 workflows, MSBuild still does the actual compilation) evaluates a project’s dependencies to determine its build order, it combines the two full paths.

Let’s say the build server is configured to create its workspace paths from (a very typical setting): D:\Builds\$(BuildDefinitionPath)

When it starts getting sources, it creates its working directory; the build definition path variable expands to “Team Project Name\Build Definition Name”.  Finally, it adds its four standard subfolders: Binaries, BuildType, Sources, TestResults.
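For a hypothetical build definition named “Application A CI” (name illustrative), the agent’s local layout would look something like:

```
D:\Builds\
└── Team Project 1\
    └── Application A CI\
        ├── Binaries\
        ├── BuildType\
        ├── Sources\
        └── TestResults\
```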

To the left, you can see what the build agent’s local file system would look like.  Now, the MSBuild evaluation issue: concatenate those paths together from the eyes of the build agent:

D:\Dev\Team Project 1\Application A\Main\Code\MyApplication\MyApplication.Business.Entities\MyApplication.Business.Entities.csproj/../../../../../../Team Project 2/CoreLib C/Main/Code/CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj (238 characters)

This is already dangerously close to the 259-character path limitation in Windows operating systems.  And most projects will have more folders underneath the project level for deeper namespacing and structure.
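The arithmetic above can be checked with a few lines (paths taken from the example; the 259-character figure is the classic Windows MAX_PATH limit of 260 including the terminating NUL):

```python
# Demonstrates how the unnormalized project path + relative reference
# approaches the Windows path limit, using the example paths from above.
import posixpath

project = ("D:/Dev/Team Project 1/Application A/Main/Code/MyApplication/"
           "MyApplication.Business.Entities/MyApplication.Business.Entities.csproj")
reference = ("../../../../../../Team Project 2/CoreLib C/Main/Code/"
             "CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj")

# The raw concatenation is what MSBuild works with before normalization.
combined = project + "/" + reference

# Resolving the reference against the project's directory collapses the
# '..' segments back down to the actual CoreLib location.
resolved = posixpath.normpath(posixpath.join(posixpath.dirname(project), reference))

assert len(combined) == len(project) + 1 + len(reference)
assert resolved == ("D:/Dev/Team Project 2/CoreLib C/Main/Code/"
                    "CoreLib.Domain.Entities/CoreLib.Domain.Entities.csproj")
```

The combined string is 238 characters, while the resolved path is only 96: the waste is pure folder-and-filename repetition.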

The first issues to brainstorm here: “Why are we naming the csproj after the namespace?  Why are the folders named after the namespace?”  Those simple things, which aren’t preached much, would eliminate a lot of waste in valuable character limits.

But of course, the most important question: “Why am I adding a project reference to CoreLib C at all?”  It’s a shared, enterprise-level library… it should be treated as an internal product that other software consumes.

I will cover this in part 2.

Under the hood of a Gated Checkin

After squashing a few “hey, this seems broken” statements that followed introducing gated checkins, I did what most good programmers do: I refuse to do anything twice, so instead I automate/document/write a batch file/etc.

Misconception: “It’s not associating to any work items or changesets?  Something isn’t right…”

Well, of course.  It’s not supposed to.  I naturally had to elaborate on this.

The process by which a gated checkin takes place is quite simple; putting too much thought into Microsoft’s automagic pipeline tends to give it the appearance of overcomplication.

The changeset simply does not exist.  It won’t exist.  That is the sole purpose of the gated checkin.  It automatically creates a private shelveset from the developer’s intercepted “changeset”.  Team Build then queues a build of the shelveset.

At this point the build workflow begins as normal, except for one extra step at the beginning: after getting sources, it merges the shelveset and then lets MSBuild compile.  At the end of the workflow there is one final inserted step as well (I’m sure you see where this is going): if successful (compile and/or unit tests), it merges the shelveset into the repository.

The reconciliation phase then begins.  If the build failed, you pull the changes back down and correct them.  If it passed and was merged, you reconcile your workspace (if you didn’t preserve changes on the checkin), and your team members are notified to reconcile as well (assuming you are using Build Notification via Power Tools).

At the end of this process, if everything passes and the shelveset is accepted, a changeset is finally generated and associated to the now-public gated build.  It essentially replaces the CI build.

The nightly build will still pickup that changeset and perform associations as well.

Gated Checkin Flowchart
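A rough text sketch of the flow described above:

```
developer checkin
      |  (intercepted; no changeset created)
      v
private shelveset  -->  gated build queued
      v
get sources  -->  merge shelveset  -->  compile + unit tests
      v
   pass? --yes-->  shelveset committed  -->  changeset created,
      |            associated to the gated build (and nightly)
      no
      v
build rejected  -->  developer pulls changes back, fixes, resubmits
```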