Storing third-party frameworks/middleware in source control when they need to alter your compiler/IDE

I know there are posts asking how to store third-party libraries in source control (such as this and this). While those have great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store the header files / libs / JARs, so that they're ready to be linked.
Typically, you simply link libraries to your app and you are good. But what about middleware/frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (.ice) compiler (similar to a CORBA IDL preprocessor).
Basically, these frameworks/middleware need to generate their own code before your application can link against it.
From the developer's point of view, ideally you want to just check out and have everything ready to go. But the IDE/compiler will not be set up properly yet, so compilation will fail.
What do you think?

Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain that was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMWare / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code that doesn't build properly.

I think a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME not set; please set it and try again').
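As a minimal sketch of that fail-fast guard, here is what the first step of such a build could look like, in Python for illustration (the environment variable and tool names are examples, not from any real project):

    # check_env.py -- fail fast with a clear message if the toolchain is missing.
    # Variable and tool names below are illustrative assumptions.
    import os
    import shutil
    import sys

    REQUIRED_ENV = ["CORBA_COMPILER_HOME"]    # e.g. where the IDL/Slice compiler lives
    REQUIRED_TOOLS = ["moc", "slice2cpp"]     # code generators that must be on PATH

    def main():
        problems = []
        for var in REQUIRED_ENV:
            if var not in os.environ:
                problems.append("%s not set; please set it and try again" % var)
        for tool in REQUIRED_TOOLS:
            if shutil.which(tool) is None:
                problems.append("%s not found on PATH" % tool)
        if problems:
            for p in problems:
                print("BUILD PRECONDITION FAILED: " + p, file=sys.stderr)
            sys.exit(1)

    if __name__ == "__main__":
        main()

The point is simply that a new developer gets an actionable message instead of a cryptic compiler error halfway through the build.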
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case. How would you feel about that kind of requirement for building a software product? It also limits people who want to adapt your software to new platforms.

What about adding one step?
A NAnt script that is started with a .bat file. The developer would only have to execute one .bat file; the .bat file starts NAnt, and the NAnt script can be made to do anything you need.
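The same idea works with any scripting layer. Here is a minimal sketch (in Python rather than NAnt, with made-up file names) of what that single entry point might do for the Qt moc / Ice Slice case from the question:

    # build.py -- single entry point the .bat file calls; steps are illustrative.
    import subprocess
    import sys

    def run(cmd):
        print(">>", " ".join(cmd))
        subprocess.check_call(cmd)

    def main():
        # 1. run the framework's code generators first (moc, slice2cpp, ...)
        run(["moc", "widget.h", "-o", "moc_widget.cpp"])
        run(["slice2cpp", "printer.ice"])
        # 2. then compile and link the application as usual
        run(["cl", "/EHsc", "main.cpp", "moc_widget.cpp", "printer.cpp"])

    if __name__ == "__main__":
        try:
            main()
        except subprocess.CalledProcessError as e:
            sys.exit("build step failed: %s" % e)

The .bat itself then reduces to a single line that invokes this script, so the checkout really is "one command and go".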

This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.

I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular third-party dependencies in source control.
Whether this strategy can be applied successfully depends on whether all developers need to be able to change the middleware code and recompile it frequently. But that issue could also be solved via a continuous integration server like TeamCity, which allows you to create private builds.
Your build process would look like the following (a sketch of step 3 follows the list):
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
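For illustration, step 3 could be a small script on the build server; the paths, version and svn layout below are assumptions, not a prescription:

    # push_middleware.py -- sketch of step 3: commit the middleware build output
    # to the project repository as a third-party reference.
    import shutil
    import subprocess

    BUILD_OUTPUT = r"C:\builds\middleware\1.4.2"                 # build server output
    PROJECT_3RDPARTY = r"C:\work\project\3rdparty\middleware-1.4.2"

    def push():
        # copy the finished build into a working copy of the project repo,
        # then add and commit it so developers pick it up on their next update
        shutil.copytree(BUILD_OUTPUT, PROJECT_3RDPARTY)
        subprocess.check_call(["svn", "add", PROJECT_3RDPARTY])
        subprocess.check_call(["svn", "commit", PROJECT_3RDPARTY,
                               "-m", "middleware 1.4.2 from build server"])

    if __name__ == "__main__":
        push()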

Update: this doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE, or a system that generates/modifies IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewrote it in Python at my current job) where third-party libraries are compiled separately (by someone), then stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are appended to (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the stuff it needs; anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file fails with a nice error message. This way, you can also check that the correct version is there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
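A rough sketch of what such a launcher does, in Python for illustration (the package list and directory names are examples; in reality they come from the checkout itself):

    # vs_launcher.py -- point INCLUDE/LIB at the unzipped packages the current
    # revision wants, then start Visual Studio with /useenv so it uses them.
    import os
    import subprocess

    WORK = os.environ["WORK"]                    # the global %work% variable
    PACKAGES = ["boost-1.37", "zlib-1.2.3"]      # read from the repo in reality

    def launch(solution):
        env = os.environ.copy()
        for pkg in PACKAGES:
            root = os.path.join(WORK, "_unzip", pkg)
            env["INCLUDE"] = os.path.join(root, "include") + ";" + env.get("INCLUDE", "")
            env["LIB"] = os.path.join(root, "lib") + ";" + env.get("LIB", "")
        # /useenv makes Visual Studio take INCLUDE/LIB/PATH from this environment
        subprocess.Popen(["devenv", solution, "/useenv"], env=env)

    if __name__ == "__main__":
        launch("MyProject.sln")

Because the environment is built per working copy, trunk and branches can point at different library versions without touching the machine-wide setup.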
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (PDB files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or stuff. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes some time, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. Works well. I do think that CMake is one of the best answers for this, though.

Related

How to reference a specific DLL for functionality in said DLL

Good day,
I have an application that I developed which transfers files between two machines ("site" and "server"). This application was set to target .NET 3.5. Furthermore, I am using Renci.SshNet to handle the connections between the machines and the transferring of said files.
The issue I am currently facing, though, is that about 70% of the "site" machines do not have a standard .NET install and are also quite old; thus these machines do not support all the required functionality, as the external DLL makes calls to System.Threading.WaitHandle.WaitOne() and System.Threading.WaitHandle.WaitAny(WaitHandle[], Int32) and other overloads of these methods.
The workaround I have for this, though, is to install netfx20SP2 or netfx30SP1, yet I am not in a position to perform this update on all machines, as they are scattered across the country and have data limitations (bandwidth and cap).
What I want to do, possibly, is to embed the System.Threading DLL that I have downloaded so that the application uses those classes instead, or alternatively just point the application to use the said DLL.
Is this at all possible, or do you have to load the DLL into the GAC? And also, will it be possible to "run" this higher version of System.Threading in the application while the system itself is on a lower framework version? Something tells me that the best bet will be to actually run the service pack installation to avoid unnecessary coding, but I'm not sure exactly how to approach this.
Thank you in advance for any assistance / suggestions,
JD
To allow the execution of an application that, let's say, targets .NET 4 on a machine that only has, let's say, .NET 3.5 installed, one can redirect Windows to check the local (executing) directory for DLLs that contain the required symbols, instead of the default symbols that get loaded upon execution (the default being the .NET Framework installed on the machine; as I understand it, the highest available version that is lower than or equal to the targeted framework is the one loaded when execution starts).
The contents of this file (myApp.exe.local) are ignored; it is just there to tell Windows to look in that folder for the applicable symbols. If they are not found there, the system rolls back to loading them from the .NET Framework directory.
Read more at Microsoft Dev Center - Docs (the link is attached to the following paragraph, which is a copy-paste of a section of that document):
To use DLL redirection, create a redirection file for your application. The redirection file must be named as follows: App_name.local. For example, if the application name is Editor.exe, the redirection file should be named Editor.exe.local. You must install the .local file in the application directory. You must also install the DLLs in the application directory.
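For illustration, a minimal sketch of setting up such a redirection (all paths and names below are made up):

    # make_local_redirect.py -- set up DLL redirection for an application:
    # create the empty App_name.exe.local marker and put the DLLs beside the exe.
    import shutil
    from pathlib import Path

    app_dir = Path(r"C:\Program Files\MyApp")   # where Editor.exe lives
    exe_name = "Editor.exe"

    # 1. the marker file; its contents are ignored, only its presence matters
    (app_dir / (exe_name + ".local")).touch()

    # 2. the DLLs that should be picked up instead of the system-wide ones
    for dll in Path(r"C:\staging\dlls").glob("*.dll"):
        shutil.copy2(dll, app_dir)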

Is using GACUtil in your coding/svn/development workflow considered Bad Practice?

There's plenty of information/blogs/MSDN articles around about NOT using GACUtil in your deployment/release scenarios, and about MSI or another Windows installer technology being a far better option.
However, is it still appropriate to use GACUtil in your development workflow?
We have a number of DLLs that are strong-named and referenced from the GAC. In order to keep the development team in sync, once a new version of a GAC-able DLL is generated, it's automatically added to all other developers' GACs as part of their daily trunk checkout. The workflow goes something like:
A developer makes a change to one of our GAC-able assemblies, tests it locally, and once it's signed off, compiles a release version of the DLL
The release version is copied from \Project_DIR\bin\Release\*.dll -> \COMPANY_GAC\Current\*.dll
Other devs run our Source Control check out batch scripts which:
Check out the newest versions of COMPANY_GAC\Current\*.dll
Run GacUtil.exe on each DLL
This has worked for us up until now, but it's getting a little more complex with:
- A larger team and more stringent management of GAC changes.
- CLR 2.0 and CLR 4.0 compiled Company_Gac assemblies requiring different versions of GACUtil.exe
- Managing assemblies on build/integration servers which have multiple feature branches (and hence having to hot-swap different GAC DLLs)
Should we be looking at something more robust than GACUtil and scripts to manage this?
One consideration was to roll something ourselves in PowerShell to check the assembly type and add the assemblies to the correct GAC. Has anyone done this?
Any other suggestions on how developers manage their GAC workflow would be welcome.
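To make the idea concrete, here is a rough sketch of what such a helper might look like (in Python rather than PowerShell, for illustration; the clr2/clr4 folder split and the SDK paths are assumptions on my part):

    # gac_install.py -- sketch of the checkout hook: install each shared DLL
    # into the GAC with the gacutil that matches its target runtime.
    import subprocess
    from pathlib import Path

    GACUTILS = {
        "clr2": r"C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\gacutil.exe",
        "clr4": r"C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe",
    }

    def install_all(root=r"\\server\COMPANY_GAC\Current"):
        # assumes the shared folder is split into clr2/ and clr4/ subfolders
        for runtime, gacutil in GACUTILS.items():
            for dll in Path(root, runtime).glob("*.dll"):
                # /i installs the assembly into the matching GAC
                subprocess.check_call([gacutil, "/i", str(dll)])

    if __name__ == "__main__":
        install_all()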
Not using gacutil.exe during deployment is an easy one: it isn't available on the target machine, since it is a Windows SDK utility and not a redistributable component.
Using it during development certainly isn't popular. Most typically you'd use a solution with the dependent projects included, so that you automatically get the latest build with local deployment and no need for the GAC. That goes well up to a point; build times can get to where you start distributing swords when the solution gets too massive.
No magic solutions past that point; the GAC certainly helps to get build times down again. In general, churn in the foundation assemblies should start at minus 1000 points: it can cause a lot of pain. Save such changes up for only, say, weekly release updates. Offhand, there's also the core need to get all this stuff properly installed on the client machines. If nobody has focused on that yet, maybe now is a good time to get that solid, since it automatically gets debugged when everybody uses it to get the assemblies they need on their machines.

Best option for check-in/out with small team using Visual Studio 2012?

I have a small team of web developers who work together on up to 50 external sites. I am trying to find a better solution than Dreamweaver's check-in/check-out for managing source. We have just started using Visual Studio 2012 here and there, and I am curious whether TFS is the way to go for us. No one here has ever used versioning or any type of source control before, so I am looking for something similar to what they are used to.
If it matters at all, our sites are all hosted on a Windows 2008 R2 server, and largely written in C#.
I think TFS is a good option to consider. As several people have commented, it will be a jump from what you and your team are used to in Dreamweaver, but I personally feel that if you are serious about managing your intellectual property, you will invest in some sort of version control system. With that said, there will be a learning curve regardless of whether you and your team select TFS, SVN, Git, etc.
Assuming you do go with TFS, you do get the added benefit of everything else that comes with TFS - it's not just about version control. This includes work item tracking, automated builds/deployments, reports, a simple SharePoint site, etc.
With TFS you get the benefit of all of these features, combined into a single product. You can accomplish a similar setup using open source products as well, but would require you to piece the products together.
I'd use the integrated Subversion client in Dreamweaver, which does the basic stuff very nicely and doesn't require the tedious navigation process that would lead to your team bypassing the system. The only problem is that DW does not support the latest versions of SVN, so you need to pick an SVN server version that is compatible. Try this:
Setting Up Version Control for Dreamweaver CS6 on Windows
1. Any previous attempts to get version control working may well have created some .svn folders and files on your PC. You MUST remove ALL of these and UNINSTALL ALL OTHER VARIETIES of Subversion software from your PC before you start.
2. Go to the VisualSVN Server website and download an archived standard version of their software, version 2.1.16. Don’t be tempted to grab a later version, because that will install SVN 1.7 or 1.8, and neither will work with Dreamweaver.
http://www.visualsvn.com/server/changes/
3. Trying to get DW working directly against a local folder using the file:// protocol probably won’t work and is also known to put data at risk. You need the server. I chose to install the VisualSVN server with the default settings, other than opting to use Windows logins and going with HTTP, not HTTPS. I decided to have the repositories live on an internal SSD drive, but any local drive will do. When creating a folder for your repositories to live in, use a name that is pretty general, e.g. ourcorepositories. I used lower case for everything.
4. Right-click on ‘Repositories’ to create a new one. Give it a name without any spaces or special characters, e.g. mynewprojectrepo, and check ‘Create default structure’. Before you OK, note the Repository URL and copy it into Notepad or a similar plain text editor so you can refer to it later during step 6 below. It will be something like
http://OFFICEDESKTOP/svn/mynewprojectrepo
Notice that the capitalised part of the URL is the name of your computer. Click OK and you now have a repository for your project.
5. Boot DW and go to your project. If you don’t have a project yet, create one and stick some dummy files and folders in it. Go to Site menu >> Manage sites… and double-click your project. Select Version Control.
6. Set Access to be ‘Subversion’ (no other choices exist), Protocol to be HTTP, and for the Server Address enter the name of your computer in lower case, e.g.
officedesktop
For the Repository Path enter (e.g., using the current example from step 4 above)
/svn/mynewprojectrepo
The Server Port should be 80. For the Username enter your Windows user name, in lower case. Enter your Windows password for the Password. This is the name and password combo that you use to log in to your PC. Click the Test button and you should get a success message. If not, the best advice is to delete any .svn files and repositories you have created and start again. Be sure not to add any slashes or omit any; the above works. Before you click Save, click the link to the Adobe Subversion resources and bookmark it in your browser. There is a lot of useful background information there. Click Save, click Done.
7. Go to your DW project and open up Local View. All of your site’s files and folders will have a green + sign beside the icon. Right-click on the site folder and click ‘Version control >> Commit’. It is a very good idea to leave comments whenever you change anything, so leave a Commit Message along the lines of “The initial commit for My New Project” and click to Commit. If you have a lot of files to go to the repository, they’ll take some time to upload. As they upload, the green + signs disappear to show that your local version is in sync with the repo.
8. Okay, that’s it: you have version control in Dreamweaver CS6. It may also work in CS5 and 5.5. Check out those Adobe resources for some good insights on workflow. I can’t help with any other ways to implement version control, but I can maybe save you time by saying that DW doesn’t integrate with Git, and that the basic but integrated Subversion client in Dreamweaver is way better than having no version control. For coverage against physical disaster, I’d also add a scheduled daily backup of your entire repositories folder to some cloud storage.
Apologies for any errors. I’d recheck all of the steps, but A) I think they’ll get you up and running and B) it’s easier to do the install and set up the first time than the second time (all those .svn files and folders to get rid of).

How to deploy: database, source and binary changes in 1 patch?

I'm part of a development team that works on many CMS based projects, using systems like Joomla and Drupal.
In our development process, all of our code changes are managed in Git. At the end of a sprint, we create a diff that we can apply via patch to the live site.
The problem is that most of the time, the changes include
Database Schema Changes
Database Data Changes
Source Code changes
Binary file changes (like images)
Git diff handles source code changes beautifully. Binary files are not included in the diff, apart from a note that the files have changed.
Database Schema Changes and Database Data Changes are a mess.
I was wondering if anything like a unified patch system exists that could be used to deploy all of these changes in one patch.
So the question is: "Is there a system that can be used to deploy all of these changes in one shot?"
Ideally, this system would allow a dry run, like patch does, but for all four of the data types above.
Edit:
Thank you, everyone, for the feedback you provided; it was a starting point for my research in this area.
Here is what I found so far:
It's difficult to deploy PHP-based applications using a Linux packaging system, because changes to the project happen iteratively rather than as releases.
It would be possible to use dbconfig to deploy changes to a project, but the problem is generating MySQL DB diffs (schema and data).
What is really missing for deployment of PHP-based applications is a deployment manager that would be installed on the server and would be the interface for deploying the patches.
I started a Google Wave on this topic and produced a lot of information as a result.
If anyone is interested in reading this wave, please let me know and I will add you.
For handling installation and upgrades of our application, we use the Debian packaging system (.deb packages).
Context:
We are making a J2EE + Flex application, shipped and administered through a VPN.
So, not so far from your situation.
Fresh installs and upgrades from one version to another are made through Puppet (a system for automating system administration tasks; it installs our .deb).
In the .deb we have:
our compiled source code
the schema of the database (handled by dbconfig)
binary stuff
how to install through apt all the other applications needed (MySQL, Tomcat...)
= all the stuff needed for a fresh install
We also add the info needed to go from one version to another:
the scripts for upgrading the database (one for each version)
new binaries
new stuff to launch at machine start (e.g., some weeks ago we added an ActiveMQ server)
=> Once the .deb is made correctly, we can install or upgrade seamlessly in one operation (it's done automatically, without any prompt).
There is one .deb per release; each .deb has a version number and a signature.
You can pick any of our .debs and make a fresh install, or upgrade from the current version to the version it holds.
The .deb is built by our continuous integration system (we build a .deb each hour, as if we were about to release a new version).
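To give a concrete idea, here is a rough sketch of packaging one release as a .deb; all names, paths and dependencies are illustrative, and a real package would rather use debhelper:

    # build_deb.py -- minimal sketch of packaging one release as a .deb.
    import os
    import subprocess

    VERSION = "1.3.0"
    ROOT = "pkg/myapp_%s" % VERSION

    CONTROL = """\
    Package: myapp
    Version: %s
    Architecture: all
    Maintainer: Dev Team <dev@example.com>
    Depends: mysql-server, tomcat6
    Description: MyApp server bundle (code, schema, migration scripts)
    """ % VERSION

    os.makedirs(ROOT + "/DEBIAN", exist_ok=True)
    os.makedirs(ROOT + "/opt/myapp", exist_ok=True)        # compiled code, binaries
    os.makedirs(ROOT + "/opt/myapp/sql", exist_ok=True)    # 1.2-update.sql, 1.3-update.sql, ...

    with open(ROOT + "/DEBIAN/control", "w") as f:
        f.write(CONTROL.replace("    ", ""))  # strip the indentation of this sketch

    # postinst runs after unpack: the place to apply database migrations
    with open(ROOT + "/DEBIAN/postinst", "w") as f:
        f.write("#!/bin/sh\nset -e\n/opt/myapp/apply_migrations.sh\n")
    os.chmod(ROOT + "/DEBIAN/postinst", 0o755)

    subprocess.check_call(["dpkg-deb", "--build", ROOT])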
What are the benefits?
Install / upgrade automatically, with confidence.
Roll back a version.
Dry runs are natively supported.
In your precise case
* Database Schema Changes
* Database Data Changes
* Source Code changes
* Binary file changes (like images)
Database => you will have to write migration scripts, one for each version (e.g., 1.2-update.sql, 1.3-update.sql); a sketch of a runner for such scripts follows below.
Source code and binaries => add them, and say in which version they have to be copied/used.
Edit: I'm not sure about source code; we are doing this with compiled code...
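A minimal sketch of a runner for those migration scripts; the schema_version bookkeeping table, the database name and the file naming are assumptions:

    # migrate.py -- apply versioned SQL scripts in order, skipping those
    # already recorded in a schema_version table.
    import subprocess
    from pathlib import Path

    def applied_versions(dbname):
        # the bookkeeping table records which scripts have been applied
        out = subprocess.run(
            ["mysql", "-N", dbname, "-e", "SELECT version FROM schema_version"],
            capture_output=True, text=True, check=True)
        return set(out.stdout.split())

    def migrate(dbname, script_dir="sql"):
        done = applied_versions(dbname)
        scripts = sorted(Path(script_dir).glob("*-update.sql"),
                         key=lambda p: [int(x) for x in p.name.split("-")[0].split(".")])
        for script in scripts:
            version = script.name.split("-")[0]     # "1.2" from "1.2-update.sql"
            if version in done:
                continue
            with open(script) as f:
                subprocess.run(["mysql", dbname], stdin=f, check=True)
            subprocess.run(["mysql", dbname, "-e",
                            "INSERT INTO schema_version VALUES ('%s')" % version],
                           check=True)

    if __name__ == "__main__":
        migrate("myapp_db")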
Some links to start:
https://wiki.ubuntu.com/PackagingGuide/Complete
http://www.debian.org/doc/manuals/maint-guide/index.fr.html#contents (in French)
http://pwet.fr/man/linux/formats/dbconfig (dbconfig)
http://www.debian.org/doc/FAQ/ch-pkg_basics.en.html (Debian package basics)
I don't think you'll find a fail-safe mechanism.
I recommend that, when possible, you take into account compatibility with the current published source when making schema/data changes.
This way you can make a very simple tool that runs the database scripts committed to a particular SVN location (you don't want diffs for database changes: if you need further modifications, you need different statements).
With the above done, you can have a simple command that runs the database changes, then the binary & source code changes.
For the database there is also the option of schema & data comparison tools; these could be used to compare environments and make sure there isn't anything unexpected missing in the change scripts. They could also generate the change scripts, but as I said, you really want to make sure those won't break the current source.
You can create a tool to do the migrations painlessly, something similar to PeopleSoft's Patch Upgrade Assistant.
It is basically a standalone executable that reads an "upgrade template" and carries out tasks. The upgrade template declaratively describes the upgrade tasks, or "steps". The steps could be: copy (for backing up or moving precompiled objects like classes and other binaries), database (for altering schema elements), and SQL scripts (for loading or transforming current data). The steps can have some predicate logic: if it is this, do this, else skip it and go to the next, etc.
The template is usually an XML file. It also provides for manual steps, with instructions for manual actions. Each step also specifies whether it is recoverable or not, and validates whether it has succeeded.
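A rough sketch of such a template-driven runner, including the dry-run asked about above; the XML schema, step types and database name are invented for illustration:

    # upgrade_runner.py -- sketch of a template-driven upgrader.
    #
    # Invented template format:
    # <upgrade>
    #   <step type="copy"   src="app.war" dest="/srv/tomcat/webapps" if="fresh==no"/>
    #   <step type="sql"    file="1.3-update.sql"/>
    #   <step type="manual" text="Restart the message broker by hand"/>
    # </upgrade>
    import shutil
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    DRY_RUN = "--dry-run" in sys.argv

    def run_step(step, context):
        cond = step.get("if")
        if cond:  # trivial predicate support: "name==value"
            name, value = cond.split("==")
            if context.get(name) != value:
                print("skip:", ET.tostring(step, encoding="unicode").strip())
                return
        if DRY_RUN:
            print("would run:", ET.tostring(step, encoding="unicode").strip())
            return
        kind = step.get("type")
        if kind == "copy":
            shutil.copy(step.get("src"), step.get("dest"))
        elif kind == "sql":
            with open(step.get("file")) as f:
                subprocess.run(["mysql", "myapp_db"], stdin=f, check=True)
        elif kind == "manual":
            input("MANUAL STEP: %s  (press Enter when done)" % step.get("text"))

    for step in ET.parse("upgrade.xml").getroot():
        run_step(step, {"fresh": "no"})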
It may be possible to have an open-source project around this requirement, which is quite common.
You need to save the git commit objects to a local file (e.g., with git format-patch or git bundle) and then import them into the other repo/branch.

What artifacts to save for a released build?

So, I now know what to save from nightly builds. What about when I give something to customers?
For example, I probably want to save debugging information (e.g. PDB).
What else?
We use:
installers
binaries
pdbs
tag of source files
any other source files that might not be in svn, for example config.status
build log
You made me wonder if I'm missing anything important.
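For illustration, a small sketch of how such a list could be gathered into one versioned archive (the paths and version are examples):

    # archive_release.py -- bundle the artifacts kept for a customer release
    # into one versioned zip.
    import zipfile
    from pathlib import Path

    VERSION = "2.1.0"
    ARTIFACTS = [
        "installers",      # what the customer actually runs
        "bin",             # binaries as shipped
        "pdb",             # debugging information
        "build.log",       # full build log
        "config.status",   # generated files that are not in svn
    ]

    with zipfile.ZipFile("release-%s.zip" % VERSION, "w") as z:
        for item in ARTIFACTS:
            p = Path(item)
            for f in (p.rglob("*") if p.is_dir() else [p]):
                if f.is_file():
                    z.write(f)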
Compiler and library version information (it may not be part of the build log). Somebody else already mentioned the binaries themselves.
Linker map file (it can sometimes help the remote debugging of a problem).
Unstripped executable (if, on a Unix system, you strip the executable before making it available to clients).
For the SDK releases we do include:
PDB and XML for the libraries (packaged with the latest snapshot of the samples)
Packaged snapshot of sources from SVN (just because we can)
Link to the online documentation (docs are generated from the source automatically)
Trace messages don't necessarily need to be generated by default, but the possibility of enabling them can be very helpful.
Results and information generated from ATPs that are run on the build (probably as part of the build process).