Is using GACUtil in your coding/svn/development workflow considered Bad Practice? - version-control

There's plenty of information/blogs/MSDN articles around about NOT using GACUtil in your deployment/release scenarios, arguing that MSI or another Windows Installer technology is a far better option.
However, is it still appropriate to use GACUtil in your development workflow?
We have a number of DLLs that are strong-named and referenced from the GAC. In order to keep the development team in sync, once a new version of a GAC-able DLL is generated it's automatically added to all other developers' GACs as part of their daily trunk checkout. The workflow goes something like this:
A developer makes a change to one of our GAC-able assemblies, tests it locally, and once it's signed off, compiles a release version of the DLL
The release version is copied from \Project_DIR\bin\Release\*.dll -> \COMPANY_GAC\Current\*.dll
Other devs run our source control checkout batch scripts, which:
Check out the newest versions of COMPANY_GAC\Current\*.dll
Run GacUtil.exe on each DLL
This has worked for us up until now, but it's getting a little more complex with:
- Larger Team, more stringent management of GAC Changes.
- CLR2.0 & CLR4.0 compiled Company_Gac assemblies requiring different versions of GACUtil.exe
- Managing assemblies on Build/Integration Servers which have multiple feature branches (and hence having to hot-swap different GAC Dlls)
Should we be looking at something more robust than GACUtil & scripts to manage this?
One consideration was to roll something ourselves in powershell to check the Assembly type and add the assemblies to the correct GAC. Has anyone done this?
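For illustration, something along these lines is what we had in mind - a minimal sketch to run from Windows PowerShell, where the Windows SDK gacutil locations and the COMPANY_GAC path are placeholders that would differ per machine:

# Sketch: push each checked-out DLL into the GAC that matches its CLR version.
$gacutil2 = "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\gacutil.exe"                    # CLR 2.0 tool
$gacutil4 = "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe"    # CLR 4.0 tool

Get-ChildItem "\\COMPANY_GAC\Current" -Filter *.dll | ForEach-Object {
    # ImageRuntimeVersion reports the CLR the assembly was compiled against (v2.0.50727 or v4.0.30319)
    $runtime = [System.Reflection.Assembly]::ReflectionOnlyLoadFrom($_.FullName).ImageRuntimeVersion
    $gacutil = if ($runtime -like "v4*") { $gacutil4 } else { $gacutil2 }
    & $gacutil /nologo /i $_.FullName
}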
Any other suggestions on how developers manage their GAC workflow would be welcome.

Not using gacutil.exe during deployment is an easy one: it isn't available on the target machine, since it is a Windows SDK utility and not a redistributable component.
Using it during development certainly isn't popular. Most typically you'd use a solution with the dependent projects included, so that you automatically get the latest build with local deployment and no need for the GAC. That goes well up to a point; build times can start to require distributing swords when the solution gets too massive.
There are no magic solutions past that point; the GAC certainly helps to get build times down again. In general, churn in the foundation assemblies should start at minus 1000 points, since it can cause a lot of pain. Save those changes up for, say, weekly release updates only. Offhand, there's also the core need to get all this stuff properly installed on the client machines. If nobody has focused on that yet, maybe now is a good time to get that solid. It automatically gets debugged when everybody uses it to get the assemblies they need on their machine.

Related

How to reference a specific DLL for functionality in said DLL

Good day,
I have an application that I developed that transfers files between two machines ("site" and "server"). This application was set to target dotNet 3.5. Furthermore, I am using Renci.SshNet to handle the connections between the machines and the transferring of said files.
The issue that I am facing currently, though, is that about 70% of the "site" machines do not have a standard .NET installation and are also quite old; thus these machines do not support all the required functionality, as the external DLL makes calls to System.Threading.WaitHandle.WaitOne() and System.Threading.WaitHandle.WaitAny(WaitHandle[], Int32) and other overloads of these methods.
The workaround that I have for this is to install netfx20SP2 or netfx30SP1, yet I am not in a position to perform this update on all machines, as they are scattered across the country and have data limitations (bandwidth and cap).
What I would possibly like to do is embed the System.Threading DLL that I have downloaded so that the application uses those classes instead, or alternatively just point the application at said DLL.
Is this at all possible, or do you have to load the DLL into the GAC? And also, will it be possible to "run" this higher version of System.Threading in the application while the system itself is on a lower framework version? Something tells me that the best bet will be to actually run the service pack installation to avoid unnecessary coding, but I'm not sure exactly how to approach this.
Thank you in advance for any assistance / suggestions,
JD
To allow the execution of an application that targets, say, .NET 4 while the machine itself only has, say, .NET 3.5 installed, one can redirect Windows to check the local (executing) directory for DLLs that contain the required symbols, instead of the default ones that get loaded upon execution (the default being the .NET Framework installed on the machine - which I believe is the highest version of the framework found at load time, or the highest available version that is lower than or equal to the targeted framework).
The contents of this file (myApp.exe.local) are ignored. It is just there to tell Windows to look in that folder for the applicable DLLs; if they are not found there, the system falls back to loading them from the .NET Framework directory.
Read more at Microsoft Dev Center - Docs (link is attached to the following paragraph which is a Copy-Paste of a section of this document).
To use DLL redirection, create a redirection file for your application. The redirection file must be named as follows: App_name.local. For example, if the application name is Editor.exe, the redirection file should be named Editor.exe.local. You must install the .local file in the application directory. You must also install the DLLs in the application directory.
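For what it's worth, a minimal sketch of the steps described above, assuming the application is MyApp.exe and you already have the DLL you want preferred (names and paths are placeholders):

# Sketch: set up DLL redirection for MyApp.exe (names/paths are placeholders).
$appDir = "C:\Program Files\MyApp"

# 1. Put the DLL you want preferred next to the executable.
Copy-Item "C:\downloads\System.Threading.dll" -Destination $appDir

# 2. Create the empty .local marker file; its contents are ignored -
#    its mere presence tells Windows to check the application directory first.
New-Item -Path (Join-Path $appDir "MyApp.exe.local") -ItemType File -Force | Out-Null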

Nuget - store packages in source control, or not?

We currently don't use nuget for our dependencies, preferring to go the old-skool way and stick them all in a libs folder and reference them from there. I know. So 1990's.
Anyway, nuget has always made me feel a bit queasy... you know, reliance on the cloud and all that. As such, I find myself in the main agreeing with Mark Seemann (see here: http://blog.ploeh.dk/2014/01/29/nuget-package-restore-considered-harmful/), who says:
Personally, I always disable the feature and instead check in all packages in my repositories. This never gives me any problems.
Trouble is, this has changed in version 3: you can't store packages alongside the solution, as outlined here: https://oren.codes/2016/02/08/project-json-all-the-things/. Which sorta screws up checking them into source control.
So, am I worrying about nothing here? Should I drink from the nuget well, or side with Mr Seemann and err on the side of caution?
Storing NuGet packages in source control is a really, really bad idea.
I accidentally did it once and I ended up bloating my source code considerably, and that was before .NET Core...
Drink deep from the NuGet well. Most software components are packaged in a similar way these days (NPM, Bower etc). The referenced blog post is two years old and package management is changing rapidly in the .NET world, so here's some of my experience lately.
NuGet packages can't be deleted from nuget.org. They can be hidden, but if your application requests a hidden package it will download it as normal. It'll never disappear into the void.
'Enable Package Restore' is no longer glitchy because it's now a default option in NuGet 2.7+. You have no choice anymore.
Packages are no longer stored per solution but per machine, which will save a ton of bandwidth and will decrease the initial fetch period when building.
If you build a new project using .NET Core, you will have dozens more packages, as the entire BCL will be available as NuGet packages. Do you really want to check in all the System.* packages into source control?
There is a very simple reason why you might want to store NuGet packages in source control: your organization doesn't want your build server to have internet access.
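In that situation, a hedged sketch of how restore can then work entirely offline, assuming the packages have been checked in under a packages-feed folder in the repo and nuget.exe is on the PATH (the folder and solution names are made up):

# Restore using only the checked-in folder as the package feed,
# so the build server never needs to reach nuget.org.
nuget restore MySolution.sln -Source "$PWD\packages-feed"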

Incremental Build with MSBuild.exe

I'm building a Visual Studio 2010 solution through Python with a call to subprocess. When called directly from the command line it takes devenv.com ~15 seconds to start. But when called from Python this jumps up to ~1.5 minutes.
Naturally I'm hoping to remove that dead time from our build. So I decided to test out MSBuild.exe (from .NET 4). It looks like MSBuild.exe runs instantly. But... it seems to do a full build every time and not an incremental one.
The command I'm using is
"C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" "C:\path\to\my\project.sln" /target:build /maxcpucount:8 /property:Configuration=Release
It seems like this should support an incremental build, but I've seen posts online indicating that MSBuild may not be able to support an incremental build like this.
Is this possible? If so, what am I doing wrong?
Update:
I've read into this a bit more. Based on
http://msdn.microsoft.com/en-us/library/ms171483.aspx
and
http://www.digitallycreated.net/Blog/67/incremental-builds-in-msbuild-and-how-to-avoid-breaking-them
It seems like I need the Input and Output properties set in my .vcxproj files. Checking my files, these are indeed missing.
When would they be generated? Most of my .vcxproj files were converted over from Visual Studio 2008, but I also generated a new project, which is missing the Input and Output properties as well.
Does VS2010 not create projects with these properties?
Update: We've since upgraded to VS 2013, and now MSBuild supports incremental builds. I never got to the bottom of the VS 2010 issue.
I think the claim that incremental builds are not supported is false. According to official sources (Managed Incremental Build), this feature was included in VS2010 SP1:
We first introduced the managed incremental build feature in VS2008. In VS2010, we were not able to re-implement the managed incremental build feature with the build system moving to MSBuild. We received strong customer requests for this feature. As a result, we re-implemented this feature and it is included in VS2010 SP1.
Other solutions I found on the web:
Projects should build incrementally already (just make sure that you do Build instead of Rebuild). The best way to check if incremental building works is to run the build from the command line. The second time you build it should take almost no time.
If things are still getting rebuilt, then perhaps you've modified your projects in some way that's messing up the build order. Looking at the build logs (via the /v option) can help you pinpoint what's going on.
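To make that check concrete, here is a small sketch using the same command line as in the question (paths unchanged) that times two consecutive builds; the second run should take almost no time if incremental build is working, and raising the verbosity (/v:d) shows which inputs MSBuild considers out of date:

$msbuild   = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
$buildArgs = "C:\path\to\my\project.sln", "/target:Build", "/maxcpucount:8", "/property:Configuration=Release"

# Build twice; an incremental second build should finish in a fraction of the time.
1..2 | ForEach-Object {
    (Measure-Command { & $msbuild $buildArgs | Out-Null }).TotalSeconds
}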
Another thing that can cause problems with incremental builds is the GenerateResource.TrackFileAccess property ("This API supports the .NET Framework infrastructure and is not intended to be used directly from your code. Gets or sets a switch that specifies whether we should be tracking file access patterns.").

NUnit vs MSTest - a fickle TDD novice's experiences with both of them

There are a ton of questions here on SO regarding NUnit vs. MSTest, and I have read quite a few of them. I think my question here is slightly different enough to post separately.
When I started to use C#, I never even considered looking at MSTest because I was so used to not having it available when I was using C++ previously. I basically forgot all about it. :) So I started with NUnit, and loved it. Tests were very easy to set up, and testing wasn't too painful -- just launch the IDE and run the tests!
As many here have pointed out, NUnit has frequent updates, while MSTest is only updated as often as the IDE. That's not necessarily a problem if you don't need to be on the bleeding edge of TDD (which I'm not), but the problem I was having with frequent updates is keeping all of the systems up-to-date. I use about four or five different PCs daily, and while updating all of them isn't a huge deal, I was hoping for a way to make my code compile properly on systems with an older version of NUnit. Since my project referenced the NUnit install folder, when I upgraded the framework, any computers with the older framework installed would no longer be able to compile my project. I tried to combat the problem by creating a common folder in SVN that had just the NUnit DLLs, but even then it would somehow complain about the version number of the binary. Is there a way to get around this issue? This is what made me stop using it the first time.
Then one day I remembered MSTest, and decided to give it a try. I loved that it was integrated into the IDE. CTRL-R,CTRL-A, all tests run. How simple! But then I saw that the types of tests available in MSTest were pretty limited. I didn't know how many I'd actually really need, but I figured I should go back to NUnit, and I did.
About now I was starting to have to debug unit tests, and the only way I could figure out how to do it in NUnit was to set NUnit as the startup application, then set breakpoints in my tests. Then in the NUnit GUI, I would run the tests to hit the breakpoints. This was a complete PITA. I then looked at the MSTest GUI again, and saw that I could just click Debug there and it would execute my tests! WOW! Now that was the killer feature that swayed me back in favor of MSTest.
Right now, I'm back using MSTest. Unfortunately, today I started to think about daily builds and did some searching on Tinderbox, which is the only tool I had heard of before for this sort of thing. This then opened up my eyes to other tools like buildbot and TFS. So the problem here is that I think MSTest is guaranteed to lock me into TFS for automated daily builds, or continuous integration, or whatever the buzzword is. My company can't afford to get locked into MS-only solutions (other than VS), so I want to examine other choices.
I'm perfectly fine to go back to NUnit. I'm not thrilled about rewriting 100+ unit tests, but that's the way it goes. However, I'd really love for someone to explain how to squash those two issues of mine, which in summary are:
how do I set up NUnit and my project so that I don't have to keep upgrading it on every system to make my project build?
how do I get easier debugging of unit tests? My approach was a pain because I'd have to keep switching between NUnit and the default app to test / run my application. I saw a post here on SO that mentioned NUnitit on CodePlex, but I haven't any experience with it.
UPDATE -- I'm comparing stuff in my development VM, and so far, NUnitit is quite nice. It's easy to install (one click), and I just point it to whatever NUnit binaries are in my SVN externals folder. Not bad! I also went into VS -> Tools -> Options -> Keyboard and changed my mapping for CTRL-R,CTRL-A to map to NUnitit.Connect.DebugGUI. Not perfect since I haven't figured out how to make NUnit automatically run the tests when it's opened, but it's pretty good. And debugging works as it should now!
UPDATE #2 -- I installed TestDriven.Net and gave it a quick run through. Overall, I like it a lot better than NUnitit, but at the moment, NUnitit wins because it's free, and since it also works with NUnit, it will allow me to "upgrade" to TestDriven.Net when the time comes. The thing I like most about TestDriven.Net is that when I double click on the failed test, it takes me right to the line in the test that had failed, while NUnit + NUnitit doesn't seem to be capable of this. Has anyone been able to make this link between the NUnit GUI and the VS IDE happen?
Many projects I've worked on have included a copy of the specific version of NUnit (or xUnit.net, whatever) in a "lib" or "external" or "libraries" folder in their source control, and reference that location for building all of their tests. This greatly reduces the "upgrade everyone" headache, since you don't really need to install NUnit or xUnit.net to use it.
This approach will still let you use something like TestDriven.Net to execute the tests, run the tests in a debugger, etc.
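If you're on Subversion, one hedged way to wire that up (the repository URL and folder names below are made up) is an svn:externals definition pinned to a specific NUnit drop, so every checkout pulls the exact same binaries and the project references point at the relative lib path:

# Pin a specific NUnit build into lib/nunit on every checkout (URL is a placeholder)
svn propset svn:externals "lib/nunit https://svn.example.com/thirdparty/nunit/2.5.10" .
svn commit -m "Pin NUnit 2.5.10 via svn:externals"
svn update    # pulls the binaries into lib/nunit on each developer's machine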
For easier debugging (and running, too) of unit tests I recommend checking out TestDriven.Net. The "Test With > Debugger" feature is so handy. The personal version is free.
Have you played with the "Specific Version" property on the NUnit.framework reference? We keep ours set to true so that the tests that are coded for a given nunit version require that specific version to execute.
I'm not sure how it will handle, for example, if you had 2.5 on your machine but another machine only had 2.4 - would .NET bind to the 2.4 version happily, or will it only bind from earlier versions to later versions of an assembly (e.g. compiled against 2.4, but 2.5 available at runtime)?

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries into source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler / IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / lib / JAR files, so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally he wants to just check out, and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, which makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect developer IDE setup which is clean, then make a VMWare / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work great if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME not set, please set it and try again').
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that kind of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
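To make the self-contained idea concrete, here's a rough sketch of a bootstrap step that fetches a dependency only when it's missing, so a plain checkout plus one script gets a new developer building (the URL, file names, and folders are invented, and Expand-Archive needs PowerShell 5+):

# Sketch of a self-bootstrapping dependency fetch (URL/paths are placeholders)
$deps   = @{ "slice-compiler-3.4.zip" = "https://deps.example.com/slice-compiler-3.4.zip" }
$libDir = Join-Path $PSScriptRoot "libs"
New-Item -ItemType Directory -Path $libDir -Force | Out-Null

foreach ($entry in $deps.GetEnumerator()) {
    $zip = Join-Path $libDir $entry.Key
    if (-not (Test-Path $zip)) {
        Invoke-WebRequest $entry.Value -OutFile $zip          # download once into the local cache
        Expand-Archive $zip -DestinationPath $libDir -Force   # unpack next to the sources
    }
}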
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file starts NAnt, and the NAnt script can be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages for us of this approach is:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular third-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), stored, and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are amended (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's setup file checks for the stuff it needs; anything that requires installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in place.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzip, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 gb)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 gb)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using bat files from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
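A compressed sketch of what such a per-working-copy launcher does (the concrete paths and versions below are invented; the real system reads them from each module's setup file):

# Point INCLUDE/LIB at the unzipped third-party versions this branch wants,
# then start Visual Studio with /useenv so it picks up this environment.
$work  = $env:WORK                                   # e.g. D:\work, set once per machine
$boost = Join-Path $work "_unzip\boost-1.37"

$env:INCLUDE = "$boost;" + $env:INCLUDE
$env:LIB     = (Join-Path $boost "lib") + ";" + $env:LIB

& "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /useenv "D:\trunk\MySolution.sln"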
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or stuff. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but then it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential to test new versions/compilations of third-party libraries.)
The basics are in a repo on bitbucket but it needs more work before it's ready for the public. Apart from doc and polish, I plan to:
- extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
- script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. Works well. I do think that CMake is one of the best answers for this, though.