Which VCS do 3D modellers use? - version-control

Which VCS do 3D modellers use? For instance, in Blender or 3ds Max.

Like any project, it is the choice of the person starting it. Subversion and git are two popular choices, each with strong points. It would be hard to say one is more popular than the other.
There are two points I would highlight in making your decision -
Disk usage - multimedia projects often use large files. Git is a distributed VCS, which means every user checking out a copy gets the entire repository; for large projects this can add up to a lot of extra disk usage. SVN keeps all the revisions on the server, and each user gets two copies of each file, one pristine and one working, so that comparisons can be made locally.
This also extends to SVN being able to check out a subdirectory of a project, while git needs to copy the entire repo. Recent git versions can check out a subset of the working files, but the whole revision history is still copied locally.
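To make the difference concrete, here is a sketch (the repository URLs, directory name, and branch name are made up) comparing a partial SVN checkout with git's sparse-checkout, which trims the working tree but still clones the full history:

    # svn: check out just one subdirectory of the project
    svn checkout https://example.com/svn/project/trunk/assets assets

    # git: sparse-checkout limits the working files, but the clone
    # still downloads the entire revision history
    git clone --no-checkout https://example.com/git/project.git
    cd project
    git sparse-checkout init --cone
    git sparse-checkout set assets
    git checkout main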
Checking out previous revisions - git identifies each revision by a unique hash string, while SVN uses a numerical sequence. This makes it easier in SVN to check out just the previous revision, or the one five revisions earlier. To get an earlier revision from git you need to list the history and copy the hash of the revision you want. At least that's the case when using the CLI; GUI apps can make this easier for both.
This extends to discussions too: SVN users can say "I have revision 125" and "I have 122" and quickly know that someone is way behind, or just one update short.
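For example (the revision numbers and hash are illustrative):

    # svn: numeric revisions make "go back to 122" trivial
    svn update -r 122

    # git: list the history, then check out the hash you want
    git log --oneline
    git checkout 3f2a9c1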

Related

Starting to version an already medium size project

I am about to start participating in the development of a medium-sized project (~50k lines) that was until now written by a single person, and not versioned; as a result folders are cluttered with different versions of the same file (named file1, file2, file3, etc.).
I proposed to start using a VCS for it (a priori Mercurial, which is the only one I've ever used -- for my personal projects -- but I'm open to suggestions), so I'm taking any good ideas as to how to "start" the repository. E.g., should I make an initial commit with all the existing files, and immediately make a new commit with the unused files removed? Or something else?
(constructive remarks on mercurial vs bazaar vs git vs whatever are also welcome.)
Thanks for your tips.
E.g., should I make an initial commit with all the existing files, and immediately make a new commit with the unused files removed?
If the size of the repository is not a concern, then yes, that is a good starting point. Otherwise you can just commit what's actually used, and go from there.
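A minimal sketch of that two-commit start, using Mercurial and the file names from the question:

    cd myproject
    hg init
    hg addremove                  # schedule every existing file for the import
    hg commit -m "Import project as-is"
    hg remove file1 file2         # drop the obsolete copies
    hg commit -m "Remove unused file versions"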
As for which system, all DVCSes stick to the same core principles. Which one you pick is entirely subjective — the only way to truly know which one you like is to try each one.
I would say use what you are most comfortable with and what meets your needs. As for where to start, I personally would seed the repo with the current source as-is, so that you can verify everything builds and runs as expected. You can make this initial seed a branch; that way you can always go back to your starting point before refactoring.
My approach to this was:
create a Mercurial repository in the existing project folder ("existing")
commit all project files to "existing"
create an empty repository in a different location ("new")
As files are tested and QA'd (this was necessary because there was so much dross in "existing"), pull them from "existing" to "new".
Once files had been pulled into "new", delete the corresponding files from "existing". If access is needed to these files while the migration is under way, push them back from "new" to "existing".
This gave me the advantage of putting everything under some sort of control for recovery purposes, plus control over introducing the project to the DVCS. Eventually the "existing" project folder became completely tested and approved for the project moving forward. At that point it could be deleted or turned into a working folder, and "new" became the actual project folder.
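In Mercurial commands, the migration might look roughly like this (the paths and file name are illustrative; "pulling" a vetted file is done here by copying and committing on both sides, since hg pull moves changesets rather than individual files):

    cd existing && hg init
    hg addremove && hg commit -m "Snapshot everything for recovery"
    cd .. && hg init new

    # once a file passes QA, move it across and record both sides
    cp existing/util.m new/util.m
    (cd new && hg add util.m && hg commit -m "Bring util.m into the new repo")
    (cd existing && hg remove util.m && hg commit -m "util.m migrated to new")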
I think Mercurial is a good choice. Lightweight, fast, very simple to use and well-integrated with Windows (if that's the platform you're dealing with).
I would probably get rid of all the clutter before the first commit. Delete everything you don't care about, run all the necessary tests and only then do the commit.
Yes, I'm dead set against the 0-day cluttering of repos.
Granted, a 50K SLOC project isn't very big, but if you commit files you already know you won't need, they will make your repo slightly bigger.
Also, remember to check that the tree doesn't contain large binary files. If it does, get rid of them if at all possible.
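A quick way to spot those before the first commit (a sketch; adjust the size threshold to taste):

    # list working-tree files larger than 5 MB
    find . -type f -size +5M -exec ls -lh {} +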

revision control for many unrelated files

I'm curious to get people's thoughts on how to manage version control for unrelated functions in Matlab.
I keep a reasonably large set of general purpose scripts, each of which is more or less independent of the others. I've been keeping them all in a single directory, containing a single repository in Mercurial. I'm starting to collaborate much more, and I'd like my collaborators to be able to modify the files, commit, branch, and merge.
The problem is that the files are independent of one another. Essentially, they're like many separate little projects. But Mercurial treats the repository as a single entity. So if a collaborator modifies file A and B, and I only want to merge in the changes from file A, things get complicated. I know that I could merge from the collaborator, then revert file B, but I'm wondering if there's a simpler way to handle this setup.
I could set up many tiny repositories to manage each file separately, but that also gets complicated.
I'm open to changing version control systems (although I like Mercurial a lot). Any suggestions?
It is considered a best practice to check in code after each bug fix, feature addition, or whatnot. Given that your files are really independent "projects", it seems unlikely a bug or feature would span multiple files. Probably the best you can do is encourage your colleagues to commit changes for only a single file at a time. Explain that better discipline about checking in leads to more manageable source control later. Hopefully you can get most of them to follow the practice, and just stop taking commits from the few obstinate ones.
It really depends on your typical reasons for merging one change but not the other. If you're using it to create a software configuration, i.e. sometimes you want to use version 1 of file A and version 2 of file B and sometimes it's the other way around, then you probably want to use subrepos to hold each file. If it's because you never want to accept part of a collaborator's change, then they need to be instructed how to make their changes more cohesive and submit them separately. That can sometimes be a difficult concept for people who either haven't used source control before, or who are accustomed to source control like svn that has little or no intrinsic concept of a changeset.
It depends whether you want to maintain a single 'master' version of the files, merging in changes that you like and ignoring others. If collaborators want to develop other branches, then they should perhaps clone the repository, and you can then accept the changesets that you want in the master.
If you want to veto changes by other collaborators, then the changes either need to be kept separate (via a cloned repository or branch) or you need a review process before changes are pushed back to the trunk.
I always use incoming repositories for collaborators. They match what the other person has made, but it avoids messing with my own repository. When you do this, you can then cherrypick their new changesets into your own repository with the transplant extension.
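A sketch of that setup (the repository path and revision hash are illustrative):

    # in ~/.hgrc, enable the bundled extension:
    [extensions]
    transplant =

    # then cherry-pick a changeset from a collaborator's incoming clone:
    hg transplant --source ../incoming-bob 4de3a2f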

Working with folders in RCS

I have been following the tutorial http://www.burlingtontelecom.net/~ashawley/rcs/tutorial.html on how to work with files using RCS. This works well but only with one file. Is there a way to create an RCS file with directories as well?
I have a project folder called myproject, and in this directory I have all my files for that project. I want to create a revision control system for the myproject folder and all its files that are inside.
As William's comment says, RCS only works with single files. (It also doesn't seem to be particularly suitable for multiple-user stuff.)
Of course, nothing stops you from putting each (source) file in a directory under RCS control; in fact, this is essentially what CVS does (though in recent versions it handles the RCS data itself, rather than invoking RCS to do it as it used to do). Unfortunately, this fragments the change history rather badly; a commit affecting many files ends up as separate commits to each file, which just happen to have the same commit message (and timestamp?), and in general every file will have a different revision in what the user might like to think of as the "same" revision. (This makes tags quite essential.) CVS also has issues with the atomicity of commits: you could end up with commit A and commit B getting tangled up, such that in file foo commit A precedes commit B, but in file bar commit B precedes commit A!
SVN (Subversion) is an attempt to rectify some of the problems in CVS, though it also brings some new limitations, and keeps many of the existing ones; it is probably wiser (as William implies) to just use a distributed version control system (DVCS) for your multi-file projects. There are many choices:
Darcs uses a unique patch-based model: a repository is treated as a sequence of patches, which can be applied to an empty tree to build the current revision; patches can often be reordered by "commuting" pairs of patches, and cherry-picking patches from other repositories is quite easy. The downside is that the change history is a bit less clear than in most DVCSes. See http://wiki.darcs.net/Using/Model, http://en.wikibooks.org/wiki/Understanding_Darcs/Patch_theory.
Directed-acyclic-graph (DAG) based DVCSes model a repository as a directed acyclic graph of revisions, where each revision can have one parent, two parents, or perhaps more. Each revision has an associated file tree state; sometimes renames are also tracked somehow.
Git, as already mentioned. Has a very simple model, but a very complicated interface: there are many commands, some of which are not really intended for humans to use (owing to many parts of it having been prototyped in shell script, probably), so it can be hard to find the ones you want. Also, its model might be a bit too simple: it doesn't track renames at all.
Bazaar (a.k.a. bzr) has a more complicated model, including support for file/directory renames. It's difficult to say how much more complicated, though, because whatever documentation may exist is not nearly as accessible as Git's. It does, however, have a rather simpler user interface, and there are a number of useful plugins, including a distributed-development-friendly SVN plugin: committing from a branch back to SVN need not interfere with the validity of others' branches of your branches, and bzr metadata is even committed back to SVN. Can make things much less painful if you want to start hacking on an SVN-based project without having commit access, but hope to get your changes committed eventually. Bazaar is my personal favorite DAG-based DVCS.
Mercurial (a.k.a. hg) seems fairly similar to Bazaar, though I think it tracks renames only for individual files, not for directories. It also supports plugins, though its SVN plugin isn't as nice as Bazaar's: it doesn't support lossless commits, so branching from other peoples' branches is unwise. I don't have much experience with it, so I can't really evaluate it in-depth.
As the comments already mention, if you are starting out with version control, you would be well advised to choose a newer system than RCS (git, mercurial, fossil, subversion, ...). That said, RCS still works fine for a single developer working primarily on a single machine - I still use it for my own code because I've not yet worked out how to get the (20+ years of) history I want into git in the way I want it.
Anyway, to use RCS, make sure you have an RCS sub-directory in each directory where you have working source code under RCS management. The RCS files will be placed in the sub-directory automatically, and retrieved automatically. If your version of make is not already aware of RCS, then you can train it so that it is - or get a version of make that does (GNU Make, for example).
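A minimal RCS round-trip in one such directory (the file name is illustrative):

    mkdir RCS            # revision files (*,v) are stored here automatically
    ci -u main.c         # initial check-in, keeping a read-only working copy
    co -l main.c         # lock and check out for editing
    vi main.c            # edit as usual
    ci -u main.c         # check in the change, keep the working copy
    rlog main.c          # inspect the revision history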
TL;DR - Look into DCVS as an alternative to RCS. It builds on CVS, which uses RCS, but is more modular, supporting a distributed repository as well as a hierarchy of directories.
I'm currently going through a similar issue, and may have found something worthy of note, especially for people who are being forced to use a light, command-line based revision control systems with multiple team members.
My manager will not get off this idea of using RCS as our version control. But per the specifications, he wants developers to be able to create and edit in their own repository on a localized server within our company. There are two issues with this:
RCS does not create, nor hold, any sort of "repository". It is software that keeps track of file edits on a per-file basis, meaning that the "repository" is nothing more than another directory with RCS checked-in files. This is sub-par for team-geared projects, to say the least.
On a large project with multiple directories and tens of individual working files, even the prospect of creating a top-level RCS directory with a symbolic link in the working directories gives rise to complications such as naming conventions, as well as forgetting which file came from which bottom-level / working directory.
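For illustration only, the symlink arrangement being described might be set up like this (paths hypothetical), and the collision it invites is exactly the problem:

    mkdir -p /srv/project-rcs            # shared top-level RCS store
    cd ~/work/project/module1
    ln -s /srv/project-rcs RCS           # ci/co in this directory now write there

    # two files named util.c in different modules would collide:
    # /srv/project-rcs/util.c,v can only hold one history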
Beyond what SamB posted, even CVS brings additional problems on top of RCS that we would have to account for, though it does give us some additional hierarchy. But one suggestion he left out was DCVS.
It's nothing more than an extension of CVS and CVSup, and it:
contains functionality to distribute CVS repositories with local lines of development and automatically handles synchronization of the distributed repositories in the background.

Small Shop, Why DVCS?

We have a small programming shop of at most 5 people working on a single project. I fully grok why DVCS is better for open source projects and for large companies, but what advantages does it have for smaller companies, other than "you can work on the airplane" (which would require extra SA work to make sure that our repositories on dev boxes were properly backed up every night)?
We also have several non-technical people (artists, translators) who can (sort of) deal with SVN; in people's experience, how much training is required to get them to move to a DVCS?
I'm going to speak from my experience, which is primarily with SVN and Hg, often working with designers and programmers who are not comfortable with version control.
My big beef with SVN and other CVCS's I've used is that they block you from making commits, not just when the network is down, but also in case of a conflict (or worse, someone locking a file so no one else can make changes to it!). You could of course commit to a branch, but between network bandwidth required to switch branches and the pain involved in merging, you still have a problem.
Of course, SVN blocks you from committing conflicted files so that you don't accidentally overwrite someone else's work; SVN requires you to at least acknowledge that you know one version or the other (or a custom combination of the two) is right. Mercurial, however, has a better solution (2, actually):
1. You can always commit to the local repository now and merge later. (All DVCS's have this feature.)
2. Even if you pull or push conflicting changes, instead of being blocked from committing, you have multiple heads via anonymous branches. (Sorry I can't really explain this in detail here, but you can google it.)
So your workflow goes from:
1. Get the latest, make changes, test them.
2. Get the latest, resolve conflicts, test the result.
3. Commit.
and becomes:
1. Get the latest, make changes, test them.
2. Commit (so you have a place to fall back to).
3. Get the latest, resolve conflicts, test the result.
4. Commit and push.
That extra commit means you're doing less work per commit, so you have more checkpoints to fall back on. And there are other ways you can make more commits without getting in others' way.
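In Mercurial commands, the second workflow looks roughly like this (commit messages are illustrative):

    hg commit -m "checkpoint: my changes"   # step 2: a safe local fallback
    hg pull                                 # step 3: get the latest
    hg merge                                # resolve conflicts locally
    hg commit -m "merge upstream"           # step 4: record the merge...
    hg push                                 # ...and publish it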
SVN is just slow enough to break my concentration and tempt me to go to Facebook; Mercurial is fast and git is faster. The speed issue becomes very important when reviewing a log or changes to a working copy. With TortoiseHg, I can click through a list of files and instantly see changes to each file; it takes a couple of seconds per file with TortoiseSVN + WinMerge (I'm not sure how much of this is due to the DVCS). The more I use these tools, the more I feel that a VCS needs to be fast, just like a text editor or a mouse cursor -- fast enough that you shouldn't need the network to use it.
Subjectively, I find TortoiseHg to be a heck of a lot easier to use than TortoiseSVN (or other Tortoises I've used). TortoiseHg is multi-platform, too. :)
One more thing: As I understand, a SVN working copy is defined recursively: Each folder is a working copy. This allows you to do some fancy-pants stuff (e.g. having a working copy that contains folders from disparate locations in the repository). I don't know if Hg has a similar feature, but in my experience, SVN's implementation of this feature causes only problems where I work, especially for those not quite comfy with SVN. When they copy-and-paste a WC folder on their machine via the OS shell instead of via svn copy, it goofs up their WC. I've goofed up my WC this way as well. It's less of a problem with Hg--you normally work with an entire repo at once, whether you clone, update, or commit.
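For what it's worth, here is the safe way to duplicate a folder inside an SVN working copy, versus the shell copy that breaks it in SVN versions where every folder carries its own .svn metadata (paths illustrative):

    # goofs up the working copy: the .svn metadata gets duplicated too
    cp -r assets assets-v2

    # the right way: SVN records the copy and keeps the WC consistent
    svn copy assets assets-v2
    svn commit -m "Copy the assets folder"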
SVN has improved a lot at merging since its release, but it still lacks file rename tracking, which often results in tree conflicts. The blog post "Renaming is the killer app of distributed version control" tackles this issue, and its comment section adds some interesting links.
A DVCS lets you push to a central repository at the cost of one additional command compared to Subversion. Occasional users should be able to adapt to this minor change in workflow, while power users get the freedom of local commits and branches without cluttering the central repository.
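The "one additional command" in practice (a sketch):

    # Subversion: one step publishes the change
    svn commit -m "Fix translation strings"

    # Mercurial: commit locally, then push when ready
    hg commit -m "Fix translation strings"
    hg push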
Concerning tooling, which might be of importance for user acceptance, Mercurial is on par with Subversion.

How do people manage changes to common library files stored across multiple (Mercurial) repositories?

This is perhaps not a question unique to Mercurial, but that's the SCM that I've been using most lately.
I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new project. The problem comes in when I want to merge all the changes I made in my latest project, back into a "master" copy of those shared library files.
Since the files stored in disjoint repositories will have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects).
I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (one of the reasons I'm using SVN less is that merges require remembering when copies were made across branches).
Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.
Three solutions, pick your favorite:
Put all projects into one repository.
Make a separate repository for shared code and different repository for each project.
One repository with subrepositories: https://www.mercurial-scm.org/wiki/subrepos, keep all common code in one subrepo and a different subrepo for each project (sketched below).
Copying actual files between repositories with no common ancestors will never be optimal as history is not preserved.
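For the subrepository option, a minimal sketch (the URL and directory names are hypothetical): the shared code lives in its own repository, and each project lists it in a .hgsub file:

    # inside a project repository
    hg clone https://example.com/hg/shared-libs shared
    echo "shared = https://example.com/hg/shared-libs" > .hgsub
    hg add .hgsub
    hg commit -m "Track shared-libs as a subrepo"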
I'd recommend against your "copy the source code" practice; use binary distribution for your custom libraries instead, with the binaries checked in alongside the source code. This:
reduces build-time
no overhead of tracking changes in all copies of the library
you can use different versions of the same library in different projects.
EDIT: And for the issue with "common" or "toolbox" libraries in general, read this post from ayende.
Use the transplant extension.