How to display bazaar version number in a latex document? - version-control

I am using bazaar as a version control system for my research. I have many latex documents. I want to display the version number in all the .dvi files under bazaar.

The easiest way to accomplish this is to use make or a similar build manager to generate your .dvi files.
Your Makefile should include a new target called version-number:
version-number:
	bzr revno > VERSION.tex
and your .dvi targets should depend on version-number:
my-project.dvi: my-project.tex [OTHER STUFF] version-number
In your .tex files, at an appropriate place (in the header/footer, title block, PDF meta-info, etc.), include the version number stored in VERSION.tex:
\input{VERSION}
When you set this up, you should bzr ignore VERSION.tex so that the repository doesn't end up storing its own version number, of course.
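Putting the pieces together, a minimal Makefile might look like this (the file names and the latex invocation are illustrative; note that make requires the command lines to be indented with a tab):

```make
# Always refresh VERSION.tex so the .dvi picks up the current revision.
.PHONY: version-number
version-number:
	bzr revno > VERSION.tex

# The .dvi depends on version-number, so VERSION.tex is regenerated first.
my-project.dvi: my-project.tex version-number
	latex my-project.tex
```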
This is all based on a similar technique used for git in the Common Lisp Quick Reference project.

Maybe the bazaar keywords plugin can help you.

I have used the LaTeX vc package successfully with Bazaar. It works by calling an external script during compilation via \write18. This may seem like overkill, but it provides a lot of functionality and works well.
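Note that \write18 (shell escape) is disabled by default in most TeX distributions, so the compilation command has to enable it explicitly; the exact flag spelling varies by distribution, e.g. with TeX Live:

```shell
# Allow the document to invoke external commands via \write18
pdflatex -shell-escape my-document.tex
```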

Related

version control, configuration management and build combined

Could you please explain why such an approach does not already exist and is not in wide use?
Or, if such a toolset exists, can you cite it?
Why is it that version control systems (ClearCase, SVN, Git, etc.) work on files, and not on units/functions?
To track changes in a piece of functionality, one has to analyze versions of a file (sometimes several files). As an example: if I want to analyze "change functionality", I would get the history of that module/function and see it in one place.
If such a tool existed, then a software configuration management (SCM) tool could put these units and/or functions together into a release configuration. Why is it that we still use Makefile, build.xml, plugin.xml, etc.?
And about the build: is it really necessary for a compiler to work on files? What if the SCM could prepare the input for a build tool and get binaries out?
Take C/C++, for example: such an SCM could prepare the whole source as one chunk and get a binary out of the compiler. In the case of Java, the SCM could prepare the .java classes and get a .jar out of the compiler.
Thank you.
PS: I am not looking for a solution to any particular problem; it is more about method. Every project takes the same approach to source/config/build, with different, evolving tools, but there is no new approach/method that addresses complex systems in a different way.
if I want to analyze "change functionality", I would get the history of that module/function and see it in one place.
If such a tool existed
It does, actually: see "Can Git really track the movement of a single function from 1 file to another? If so, how?", and the git blame -C command.
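A minimal sketch of that feature in action (all file and function names here are made up for illustration; the script builds a throwaway repository, moves a function to another file, and asks blame to track it):

```shell
# Create a throwaway repository with one function in a.c
mkdir demo && cd demo && git init -q .
printf 'int square(int x) { return x * x; }\n' > a.c
git add a.c
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add square'

# Move the function to b.c in a second commit
git mv a.c b.c
git -c user.name=demo -c user.email=demo@example.com commit -qm 'move to b.c'

# -C tells blame to detect lines moved or copied from other files, so the
# line is attributed to the commit that introduced it, not to the move.
git blame -C b.c
```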
Why is it that we still use Makefile, build.xml, plugin.xml, etc.?
It is about declaration: you declare what you want to build and, most importantly, in which order and with which dependencies.
What if the SCM could prepare the input for a build tool and get binaries out?
The input for the build tool remains files, not "units/functions": the compilation tools are far better equipped to parse/analyze and extract those units, building the binary as a result.
Putting too many responsibilities into the SCM tool alone amounts to trying to do it all, which means it will do "all" of it not very well, as opposed to doing one thing brilliantly.

Generating a list of include directories, and project files from a CMakeLists.txt

I'd like to use Emacs to work on my project, which is built using CMake. While this generally works fine, I'd like to implement better project management commands. Is there a simple way to generate some sort of file that acts as a listing of the project files?
It seems the best way may just be some set of CMake macros that do a custom write to a file; are there perhaps any better solutions?
I have no direct experience with CMake. But there are a couple of approaches to solving this.
The canonical way is to generate a TAGS table as part of your build process. You will get symbol completion/navigation on top of easy access to the file list. And ctags is hyper fast. I'll leave you to google how to do that specifically; hint: wiki.
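For example, with Exuberant Ctags the table can be regenerated from a custom target in the CMakeLists.txt (the target name and the choice of source directory are assumptions, a sketch rather than a canonical recipe):

```cmake
# Hypothetical target: run "make tags" to rebuild an Emacs-style TAGS table.
# -e makes ctags emit Emacs TAGS format; -R recurses into subdirectories.
add_custom_target(tags
    COMMAND ctags -e -R ${CMAKE_SOURCE_DIR}
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
    COMMENT "Generating TAGS table for Emacs")
```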
Alternatively, you can get an Emacs project management package like EDE, eproject, or mk-project that defines the concept of a project. See the wiki.
You can look at the CEDET mailing list: CMake support was discussed not long ago, and at least one person is actively working on CMake support in EDE (CEDET's project management).

Project files into VCS or not?

In our company we have a discussion about whether to put project files into our Version Control System. What do you think? Consider an Eclipse project file for a C project that contains source files, make files, and other things. Would you put it into the VCS?
If the project files meet the following criteria:
They only contain information needed for building the source, quick checkout, commit, and the basic routines (for developers)
Parts meant for release can be separated from internal-only parts (if you are a FOSS or a proprietary project, for example)
They don't change anyone's IDE setup or personal preferences
They can be treated like source code for internal-only releases, and may have their own bugs and patches
I don't see a major reason why not. Makefiles/autotools defs usually go in the RCS (autotools inputs, at least). Provided the data stored is relevant to everyone and their machines (no build output directories, etc.), give it a go.
I'd recommend checking them in unless they contain absolute paths (some ancient IDEs like Borland C++ Builder do that), or - like Aiden Bell wrote - they contain IDE setup info.
For example: with Eclipse, .project and .classpath are safe. With Visual Studio, *.csproj and *.sln are safe (whereas *.suo is not).
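Expressed as an ignore file (shown here in .gitignore syntax; svn:ignore works similarly, and the Eclipse .metadata entry is an assumption about a typical workspace layout):

```gitignore
# Visual Studio per-user state - not safe to share
*.suo
*.user
# Eclipse per-user workspace state
.metadata/
```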
I'd recommend always checking them in. It won't cost you anything, and sometimes you will run into situations where you will be happy to be able to check, e.g., different settings of project files.
If you're using RCS to mean a general revision control system, then, yes, check source and make files in, and in general pretty much anything that you can't easily recreate from what you've got checked in.
If you're using RCS to mean rcs, then please, PLEASE upgrade to something better. SVN would be a good choice, or Git or something like that.

Do you put your development/runtime tools in the repository?

Putting development tools (compilers, IDEs, editors, ...) and runtime environments (JRE, .NET framework, interpreters, ...) under version control has a couple of nice benefits. First, you can easily compile/run your program just by checking out your repository; you don't need anything else. Second, the combination is guaranteed to be version-compatible, since you tested it once. However, it has its own drawbacks. The main one is the large volume of big binary files that must be put under version control, which may make the VCS slower and the backup process harder. What's your idea?
Tools and dependencies actually used to compile and build the project, absolutely - it is very useful if you ever have to debug an issue or develop a fix for an older version and you've moved on to newer versions that aren't quite compatible with the old ones.
IDEs and editors: no. Ideally your project should be buildable from a script, so these should not be necessary. The generated output should be the same regardless of what you used to edit the source.
I include a text (and thus easily diff-able) file in every project root called "How-to-get-this-project-running" that includes any and all things necessary, including the correct .NET version and service packs.
Also, for proprietary IDEs (e.g. Visual Studio), there can be licensing issues, as this makes it difficult to manage who is using which pieces of software.
Edit:
We also used to store, in source control, batch files that automatically checked out the source code (and all dependencies). Developers just check out the "Setup" folder and run the batch scripts, instead of having to search the repository for the appropriate bits and pieces.
What I find very nice and common (in the .NET projects I have experience with, anyway) is including any "non-default install" dependencies in a lib or dependencies folder under source control. The runtime is provided by the GAC and is kind of assumed.
First, you can easily compile/run your program just by checking out your repository.
Not true: it often isn't enough to just get/copy/check out a tool; instead, the tool must also be installed on the workstation.
Personally I've seen libraries and 3rd-party components in the source version control system, but not the tools.
I keep all dependencies in a folder under source control named "3rdParty". I agree that this is very convenient and you can just pull down the source and get going. This really shouldn't affect the performance of the source control.
The only real drawback is that the initial size to pull down can be fairly large. In my situation, anyone who pulls down the code usually runs it as well, so it is OK. But if you expect many people to pull down the source just to read it, then this can be annoying.
I've seen this done in more than one place where I worked. In all cases, I've found it to be pretty convenient.

What is the easiest way to figure out who wrote/edited this line of code?

This obviously requires the source file to be under source control. I would ideally like a tool which works inside the IDE (Eclipse, Visual Studio, etc.), but an external tool would be nice too. Obviously, it is possible to manually go through previous versions of the file and compare them, but I am looking for a way to quickly see who is responsible for a code section.
I am using CVS, but the tool should ideally work with different source control systems.
That sounds like the blame function, supported in Eclipse with CVS or with Subversion (also in Eclipse).
Since you mention Eclipse: the Eclipse name for that feature is Show Annotations.
You don't mention which source control system you are using.
If you're using Subversion, you can take a look at:
svn blame
:)
For Visual Studio .NET with TFS, the function is "Annotate" and works pretty much the same as Blame.
(personally I refer to these as the team's witch hunt tool).
The question is quite broad/open. Still, it is a good one; it can be used as a reference...
At work, I use Perforce with its graphical interface. The Time-lapse view lets you see the file with, for each line, the revision in which it was changed, plus details (who submitted the change, when, etc.). And you can move a slider to see previous versions.
There is a command line version: p4 annotate.
I am starting to use Mercurial so I looked at it. Version control systems comparison (good site, I just discovered it) shows that the command is hg annotate.
In many version control systems including CVS, Perforce, AccuRev, Mercurial, and Team Foundation Server, the command is annotate.
In Subversion and RCS, the command is blame.
For example, with CVS:
cvs annotate foo.cc > foo_changes.txt
will create foo_changes.txt, which lists the revision number and username associated with the most recent change for each line in the current version of foo.cc. Using different options will give you the same info for previous versions or tagged versions of the file.
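For quick reference, the equivalent invocations in the other systems mentioned above look like this (foo.cc is just a placeholder file name):

```shell
cvs annotate foo.cc   # CVS
svn blame foo.cc      # Subversion
hg annotate foo.cc    # Mercurial
p4 annotate foo.cc    # Perforce
```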
I needed this question answered too, but it didn't jump out at me right away when reading the answers already posted, so hopefully this summary should help.
In AccuRev this is even smarter with the annotate + "version slider" function, which lets you browse through the annotated version of the file across its history (not only who changed what in the latest revision, but in all revisions).
For the Perforce plugin in Eclipse, Annotate does not show up in the context menu.
So I need to use p4 annotate my-file and then browse the history using Eclipse.