What artifacts to save for a released build?

So, I now know what to save from nightly builds. What about when I give something to customers?
For example, I probably want to save debugging information (e.g. PDB).
What else?

We use:
installers
binaries
PDBs
a tag of the source files
any other generated source files that might not be in SVN - for example, config.status
build log
You made me wonder if I'm missing anything important.

Compiler and library version information (it may not be part of the build log). Somebody else already mentioned the binaries themselves.
Linker map file (it can sometimes help with remote debugging of a problem).
Unstripped executable (if, on a Unix system, you strip the executable before making it available to clients).
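To make the lists above concrete, here is a minimal sketch in Python of a post-build step that sweeps those artifacts into one versioned archive. The directory layout, file globs, and version string are illustrative assumptions, not anything from an actual build:

```python
import zipfile
from pathlib import Path

# Hypothetical layout: adjust to your own build output structure.
BUILD_DIR = Path("build/Release")
ARCHIVE_DIR = Path("release-archive")

# The artifact kinds discussed above; the globs are illustrative only.
ARTIFACT_GLOBS = [
    "*.msi",         # installers
    "*.exe",         # binaries
    "*.dll",
    "*.pdb",         # debugging information
    "*.map",         # linker map files
    "build.log",     # build log
    "config.status", # generated files not kept in version control
]

def archive_release(version: str) -> Path:
    """Collect release artifacts into release-archive/<version>.zip."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    archive = ARCHIVE_DIR / f"{version}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for pattern in ARTIFACT_GLOBS:
            for path in BUILD_DIR.rglob(pattern):
                # Store paths relative to the build dir so the archive
                # mirrors the build output layout.
                zf.write(path, path.relative_to(BUILD_DIR))
    return archive

if __name__ == "__main__":
    print(archive_release("1.2.3"))
```

Tagging the sources and recording the compiler/library versions would still be separate steps.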

For the SDK releases we include:
PDB and XML for the libraries (packaged with the latest snapshot of the samples)
Packaged snapshot of sources from SVN (just because we can)
Link to the online documentation (docs are generated from the source automatically)

Trace messages don't necessarily need to be generated by default, but the ability to enable them can be very helpful.
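For example, a minimal way to keep trace output off by default but switchable in the field; the APP_TRACE variable name here is made up:

```python
import logging
import os

# Trace output stays off unless the (hypothetical) APP_TRACE environment
# variable is set, so shipped builds are quiet by default.
level = logging.DEBUG if os.environ.get("APP_TRACE") else logging.WARNING
logging.basicConfig(level=level, format="%(asctime)s %(levelname)s %(message)s")

logging.debug("trace: entering startup sequence")  # visible only with APP_TRACE set
logging.warning("always visible")
```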

Results and information generated from ATPs that are run on the build (probably as part of the build process).

Related

Error when running ClickToBuild.bat file

I downloaded the Orchard Gallery server from the CodePlex repository at this link:
https://galleryserver.codeplex.com
When I run the ClickToBuild.bat file, which according to the Readme.txt file should build and run all of the tests in both projects, I get this error:
Can anyone help me?
Thanks a lot.
First of all, the project has been discontinued, so you probably will not get support from the author. Second, it's from 2011.
Looking at your errors, it seems you are missing assets that are being referenced. If you look closely at the project tree and read the error, you will see that NuGet packages are being used; those might need updating for your project to build.
If it is an old NuGet version, you might have problems depending on your VS version, but it is salvageable. Just make sure all your packages are updated before building.
If you look at UpdateNuGet.bat you will be able to see what it tries to do in the folder where the compiler looks for the assets.
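As a rough illustration of that order of operations (restore/update packages, then build), assuming nuget.exe and msbuild are on the PATH; the solution name here is a placeholder:

```python
import subprocess

SOLUTION = "GalleryServer.sln"  # hypothetical name; use the project's real .sln file

# Restore (and optionally update) NuGet packages before invoking the build,
# so the referenced assets exist where the compiler expects them.
subprocess.run(["nuget", "restore", SOLUTION], check=True)
subprocess.run(["nuget", "update", SOLUTION], check=True)
subprocess.run(["msbuild", SOLUTION, "/t:Build", "/p:Configuration=Release"], check=True)
```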
Good luck

Can't Compile Tesseract API example for Windows using Tesseract 3.02.02 archive

I'm looking at using Tesseract to do some work with PDF files, and so I want to use the library rather than an external executable.
I started by downloading the full Tesseract source and looking at building that. Sadly the standard sources don't provide any means to build on a non-Linux platform, in my case Windows. There are methods for doing so, and I looked at those.
Firstly, the VS2008 build doesn't build. I'm aware that it needs Leptonica, but I figured I'd tackle that afterwards and just tried to build the existing code. It fails with "fatal error C1083: Cannot open include file: 'allheaders.h': No such file or directory". Nothing to do with Leptonica at this stage; it simply doesn't work.
Even if I were able to get past that, I'd have to build Leptonica, and that requires using GNU tools and therefore an installation of Cygwin, so I gave up. I have a MinGW installation (I've never managed to get Cygwin to work in a usable fashion), but I'm not keen enough to mess with such a complicated and fragile build.
So I decided I'd just use the pre-built binaries which some kind soul creates. I downloaded those from code.google.com. Now I need to look into using the code, so the next obvious step is the Tesseract API example, which states that it requires "tesseract-ocr-3.02.02-win32-lib-include-dirs.zip" - no problem, because I already have that now.
There is no real clue as to where the API example wants the files to be placed, but a little messing about gets them into appropriate locations. Press build, and "fatal error C1083: Cannot open include file: 'allheaders.h': No such file or directory", just like trying to build Tesseract from source.
And indeed there is no such file.
So, where is this file?
I also struggled some time ago to make it work under Windows, and then I found this Git repository: https://github.com/charlesw/tesseract-vs2012
It includes all the needed external libraries (Tesseract needs Leptonica, but Leptonica in turn needs external libraries to handle the different image formats) and also works great with VS 2013.
OK, so now I see that allheaders.h is part of Leptonica. That still leaves me wondering why the pre-built Tesseract library requires that I have Leptonica available; I would have expected that to be built in, but I guess it isn't.

Why are my Eclipse project builds so slow?

We use Eclipse (Indigo, with STS). Some of our projects take an inordinately long time to build. Often the progress indicator sticks at, say, 87% for 30 seconds.
I'm trying to find out what Eclipse is spending its time on during the build cycle. I hope to be able to optimize the build or disable the components that are causing it to be so slow. I'd like to see a log file saying things like "compiling java code", "processing resources", etc.
I've poked around the log files in the .metadata directory. I've looked on the Eclipse site for tips. I've tried using "-debug" when starting Eclipse. I still can't find the information I'm looking for.
Is there any way to get Eclipse to spit out a log of what activities it is spending its time on when it builds a project?
What kind of projects are these? Java? Dynamic Web? Two places to look for hints about what's going on are in the project Properties dialog: the Builders section and the Validation section. Try disabling the validations to see if that makes a difference in your build times.
To get some insight into what's happening at the times when the build seems to hang, try setting the -debug and -consoleLog options, as described here.
Disable your virus scanner software for your workspace and project directories. I increased the speed of my build in that way.
You can go to Window -> Preferences -> General -> Workspace -> Build Order to edit the defaults according to your project's needs.
Also check the maximum number of iterations when building with cycles.
I hope it works.
Since Eclipse is a Java application, the usual debugging tools are at your disposal. In particular, you might try connecting to Eclipse with JConsole and inspecting a thread dump taken when the build "hangs", or run Eclipse within a profiler.
You might find out things like a validator trying to download an XML schema and waiting for the timeout because Eclipse is not configured to use the corporate proxy server - something which is very hard to find out by other means ;-)
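If attaching JConsole interactively is inconvenient, a few jstack snapshots taken while the build appears stuck show much the same thing. A small sketch, assuming the JDK's jstack is on the PATH and you've looked up the Eclipse PID yourself (e.g. with jps):

```python
import subprocess
import time

ECLIPSE_PID = 12345  # placeholder: substitute the real Eclipse process id

# Take a few JDK thread dumps a couple of seconds apart while the build
# "hangs"; a thread repeatedly stuck in the same stack (e.g. a network
# read inside a validator) points at the culprit.
for i in range(3):
    dump = subprocess.run(
        ["jstack", str(ECLIPSE_PID)],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(f"eclipse-threaddump-{i}.txt", "w") as f:
        f.write(dump)
    time.sleep(2)
```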
Look into Apache Ant build scripts. Eclipse has support to auto-generate them as a starting point instead of coding the whole thing by hand. The shop I worked in used tuned Ant scripts to optimize and control build order. We then piped the output to log files using shell scripts.
You can try replacing it with this aapt. My build for a particular project went from 3 minutes to 41 seconds.
This is an old post, but I thought I'd share my solution. I was using Eclipse Luna and I noticed that when you keep working on a Git branch without checking in to Git, over time the build becomes very slow. In my case I just deleted the .git folder and the .gitignore file, and the build became very fast. Please note that this will disconnect Eclipse from Git, so use this approach only if you know how to reconnect to the Git branch using Git commands.

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries into source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler / IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / libs / JARs, so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically, these frameworks/middleware need to generate their own code before your application can link to it, as the sketch below illustrates.
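For example, with Qt, every header that declares Q_OBJECT must be run through moc and the generated .cpp compiled in before linking can succeed. A rough sketch of that pre-build pass, with made-up paths:

```python
import subprocess
from pathlib import Path

SRC_DIR = Path("src")        # hypothetical source layout
GEN_DIR = Path("generated")

GEN_DIR.mkdir(exist_ok=True)

# Run Qt's moc over every header that declares a QObject subclass;
# the generated moc_*.cpp files must then be compiled and linked in.
for header in SRC_DIR.glob("*.h"):
    if "Q_OBJECT" in header.read_text(errors="ignore"):
        out = GEN_DIR / f"moc_{header.stem}.cpp"
        subprocess.run(["moc", str(header), "-o", str(out)], check=True)
```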
From the point of view of the developer, ideally he wants to just check out and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect developer IDE setup which is clean, then make a VMWare / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.
This of course does not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set it and try again").
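A tiny sketch of that fail-fast idea; the tool and variable names are placeholders (slice2cpp is Ice's Slice-to-C++ compiler, the rest depend on your stack):

```python
import os
import shutil
import sys

# Hypothetical prerequisites; substitute your middleware's real ones.
REQUIRED_TOOLS = ["moc", "slice2cpp"]
REQUIRED_ENV = ["CORBA_COMPILER_HOME"]

def preflight() -> None:
    """Fail fast, with a clear message, if the build environment is incomplete."""
    for tool in REQUIRED_TOOLS:
        if shutil.which(tool) is None:
            sys.exit(f"'{tool}' not found on PATH; install it and try again.")
    for var in REQUIRED_ENV:
        if var not in os.environ:
            sys.exit(f"'{var}' not set; please set it and try again.")

if __name__ == "__main__":
    preflight()
    print("environment OK, proceeding with build")
```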
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that that is applicable in the general case; how would you feel about that kind of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file could start NAnt, and the NAnt script could be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change the middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff, if so, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform c/c++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), then stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is then launched with /useenv.
Each module's file checks for the stuff that it needs, and anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in place.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (PDB files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or stuff. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes some time, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
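For a rough idea, the per-library setup step boils down to something like this sketch (the WORK variable stands in for %work%; the URL, library pin, and solution name are illustrative, and devenv is assumed to be on the PATH - the real system does considerably more):

```python
import os
import subprocess
import urllib.request
import zipfile
from pathlib import Path

WORK = Path(os.environ["WORK"])  # stands in for the global %work% variable
CACHE, UNZIP = WORK / "_cache", WORK / "_unzip"

# Hypothetical third-party pin for this revision of the repo.
LIB = "boost-1.37"
URL = f"http://buildserver.example.com/third-party/{LIB}.zip"

def ensure(lib: str, url: str) -> Path:
    """Download the pinned zip once, unzip it once, and return the unzip dir."""
    CACHE.mkdir(parents=True, exist_ok=True)
    zip_path = CACHE / f"{lib}.zip"
    if not zip_path.exists():
        urllib.request.urlretrieve(url, zip_path)
    dest = UNZIP / lib
    if not dest.exists():
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(dest)
    return dest

include_dir = ensure(LIB, URL)
env = dict(os.environ)
env["INCLUDE"] = f"{include_dir};{env.get('INCLUDE', '')}"

# Launch Visual Studio so it picks up INCLUDE/PATH from this environment.
subprocess.run(["devenv", "/useenv", "MySolution.sln"], env=env)
```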
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.

Carbide does not include debugging info for some files

In Carbide 2.0.2, if I set the active configuration to "Phone Debug GCCE", build the project, go to the Debug perspective, choose the "Executables" tab, and select the executable file, the Source File Name/Location window will list all the files I am able to use while debugging.
The problem is that the list does not contain all the files from the project, even though their code is successfully linked and executed on the device. As a consequence, I am not able to set breakpoints in these files.
What is the catch and how can I fix it?
Thank you.
This is a problem with the version of GCCE that is used by default with Symbian. It has a number of bugs with debug information, including sometimes missing line information for some files.
The alternatives are (a) the commercial RVCT compiler, or (b) following the in-progress work to move to a newer GCCE compiler. A good starting point for that is here:
http://developer.symbian.org/wiki/index.php/The_GCCE_toolchain_initiative