Suppose I have the following (desired) folder structure:
* CommonProject
* Project#1
    -> CommonProject (link)
* Project#2
    -> CommonProject (link)
Here CommonProject is the actual location of the shared source, and CommonProject (link) is merely a soft link to that location. If we imagine this as a tree view in a visual client, expanding Project#1 would show CommonProject as a subdirectory, even though the files are not actually stored there.
The purpose of this is to enable the following behaviour:
When I check out Project#1, I get the files associated with that project as well as a subfolder CommonProject containing all of its files (as if Project#1 contained a copy of the files in the version control repository). If I modify CommonProject's files inside Project#1 and submit my changes to the repository, the changes go into the CommonProject location (no file is actually stored under Project#1 in the repository). If I then sync Project#2, which also contains a symlink to CommonProject, it picks up my updates.
Essentially the duplication of files only exists on my machine, but in the repository there is only one version of CommonProject.
I know Perforce can't do this without juggling three specs, which is complicated and error prone, especially when a lot of people do it. Is there a source control system out there that can do this? (A pointer to some docs on how it can be done is a plus.)
Thank you.
Subversion can directly store symlinks in the repository. This only works on operating systems that support symlinks, though, as svn just stores the symlink the same way it would any other file.
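For example, versioning the link itself would look roughly like this on a Unix-like OS (layout taken from the question above):

cd Project1
ln -s ../CommonProject CommonProject
svn add CommonProject
svn commit -m "Store the symlink, not the files"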
I think what you really want, though, is to link to separate projects. Subversion supports this through externals, and git through submodules. Another alternative is to manage this sort of thing in your build process, so that shared resources are gathered when you initialize the build. Generally, updating a utilities library that changes often is going to cause stability problems, so do this manually (or with clever scripts) when you need to.
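The Subversion and git equivalents would look roughly like this (the repository URLs are hypothetical):

svn propset svn:externals "CommonProject http://svn.example.com/repo/CommonProject/trunk" Project1
svn commit -m "Pull CommonProject in as an external" Project1

git submodule add https://example.com/CommonProject.git CommonProject
git commit -m "Add CommonProject as a submodule"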
You'd probably be much better off just storing the projects in a flat layout (one directory per project, all at the same level) and using whatever your build system or IDE offers to link everything together.
First, there are no problems with my build path, and nothing related to this error shows up in the Problems view. What happens is:
I check a project out of SVN
I then build that project
SVN "version" and link gets removed from classes and build directories (and red ! appears)
I can no longer commit project as a whole, as I get this error:
svn: Commit failed (details follow):
svn: Working copy 'C:\Users\<me>\Documents\eclipseSDK\<project>\classes' is missing or not locked
It's NOT missing; it just was "refreshed" during the build.
I've also tried to delete the dir on the SVN side, but the red arrows still appear and committing throws an error - even though those dirs are no longer part of the checked resources.
Also, basically, any "Team" menu option no longer works on those two dirs; if I try Team > Add to Version Control, nothing shows up in the Resources window.
I don't even know if I care about the red exclamation as much as just getting those two dirs out of the SVN process. Like I said, I've deleted on the SVN side even, so they are no longer there.
Are your compiled class files part of your Subversion repository? If so, GET RID OF THEM FROM SUBVERSION!
Sorry about the screaming, but files that can be built (like classes and jar files) should not be stored in version control. It just doesn't make sense.
Source control allows you to see the history of changes. You can see who made what changes in the source, but you can't do that with class files or jars. Source control allows you to branch and merge changes; you can't do that with class files either. And they go stale very quickly. No one cares about class files that were built last month.
The biggest issue is sheer size. Binary files don't diff well (and many version control systems don't even try). Let's say you build 15 megabytes worth of class files each time you do a build; that's probably at least 12 megabytes added to your repository every single time. Within a year, you could add a gigabyte to your repository. You can easily end up with the vast majority (and I mean in the 90% range) of your repository being nothing more than compiled code. Compiled code that no one cares about. Compiled code that can't help you understand your development history. Compiled code that clogs up your repository.
Go to your classes directory and do a revert from the Team menu on that directory. You'll lose all of your changes, but so what? They shouldn't be there. This will set things up so you can at least commit your changes.
Then delete that classes directory and all of its contents from Subversion and commit. Eclipse will create the directory when it does a build, and you can make the directory in Ant's build file before you compile the code. You should then set an svn:ignore property on the parent directory to prevent people from checking in the classes directory.
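From the command line, the same cleanup would look roughly like this (assuming the directory is really named classes and sits at the working-copy root; the Team menu exposes the equivalent actions):

svn revert --recursive classes
svn delete classes
svn commit -m "Remove compiled classes from version control"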
If you need to store jar files for other projects to use, use a release repository. In fact, use a Maven repository like Artifactory or Nexus. You don't have to use Maven. Ant can use Ivy to fetch the files from a Maven repository, or you can simply use URLs to download the files as needed via wget or the <get/> task in Ant.
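For instance, a build script could fetch a released jar on demand instead of versioning it (the URL and file name here are made up):

wget -N -P lib/ https://repo.example.com/releases/common-lib-1.0.jar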
OK, I deleted [build & classes] from local, and deleted from SVN. Made a minor edit to a file, just to be able to run Commit - which worked fine.
That's good.
Both local & SVN were in sync. build & classes are (and were) part of ANT build file, so ran build again. build & classes reappeared (with ? this time), and added to svn:ignore.
Any file that's not in the repository will show up with a question mark. Those question marks will disappear once you add the svn:ignore property (covering classes and build) to their parent directory, then sync and build.
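The svn:ignore value is newline separated, so both directories can go into one property (assuming they share the project root as their parent):

svn propset svn:ignore "classes
build" .
svn commit -m "Ignore build output directories" .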
Made another file change, but now I cannot commit the entire project, as I'm getting more/different errors now (e.g. "out of date").
"Out of date" means that someone somewhere modified a file that was in the repository and committed it. It's very likely that the change is to one of the *.class files that you removed. You will have to do an update before a commit. When you do the update, you will likely see a lot of conflicts where a class file was changed but you had removed it. Resolve these conflicts, update, and commit again.
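If you prefer the command line, the sequence is roughly this (the conflicted path is a placeholder):

svn update
svn resolve --accept working path/to/conflicted-file
svn commit -m "Commit after resolving stale class-file conflicts"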
By the way, I have a pre-commit hook that is useful in this situation. First, it can prevent people from adding that classes directory to the repository (and any files under it). Second, it can make sure that svn:ignore is set up correctly.
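A minimal sketch of what the first check could look like, using svnlook against the in-flight transaction (this is not the poster's actual script, and it assumes classes/ never sits at the repository root):

#!/bin/sh
# Reject commits that add anything under a classes/ directory.
# svnlook prints lines like "A   trunk/classes/Foo.class" for added paths.
REPOS="$1"
TXN="$2"
if svnlook changed -t "$TXN" "$REPOS" | grep -Eq '^A.+/classes/'; then
    echo "Compiled classes must not be committed." >&2
    exit 1
fi
exit 0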
In my project I have a directory structure like this :
project
--common
--subproject
----common (junction, not a copy)
The subproject needs some files from common. Therefore I created a "junction" (on XP, using the Sysinternals command) to be able to see and make changes in common from within the subproject.
This works like a charm: any change made through either path is immediately visible in both places, since they in fact point to the same folder.
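For reference, the junction would have been created with something like this, run from the project directory (the Sysinternals tool takes the junction path first, then the target):

junction subproject\common common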
Then I made a separate project in Eclipse for the subproject folder and its own svn repository.
But when I want to commit the subproject, it wants to commit the 'common' folder as if it were a real folder, when in fact this folder should be committed only to its parent's repository!
Is there a way of telling svn to make commits to a different repository than its own?
svn:externals is not a good solution, since it creates duplicate sources and makes things even harder to maintain, because the copies are not automatically kept in sync.
Eclipse's file system abstraction features linked folders.
Have you tried using those rather than a junction at the filesystem level? (I don't know how the Subversion plugin treats them, but perhaps you're in luck.)
I would like to know if there is any SCM that meets these criteria:
does not keep the entire depot history locally, because that can be huge for some projects
does not pollute the entire source tree with hidden directories (like .svn ones)
works decently with binary files, or at least can limit the number of revisions stored for them (like Perforce)
syncs over HTTP
is free
optionally, can link other repositories, or even ones from other SCMs (something like svn:externals)
Subversion 1.7 (unreleased as of this writing) does away with the .svn directory in each directory.
From http://subversion.apache.org/docs/release-notes/1.7.html#wc-ng :
A key feature of the changes introduced in Subversion 1.7 is the centralization of working copy metadata storage into a single location. Instead of a .svn directory in every directory in the working copy, Subversion 1.7 working copies have just one .svn directory—in the root of the working copy. This directory includes (among other things) an SQLite-backed database which contains all of the metadata Subversion needs for that working copy.
I've never really been bothered by the .svn dirs littered throughout; packaging, or even deploying with svn export, always seemed to make them not matter when it mattered. Out of curiosity, what makes them onerous in your situation?
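For the record, an export produces a clean tree with no .svn directories at all (the URL is hypothetical):

svn export http://svn.example.com/repo/trunk mysite-deploy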
You should have a look at Git: http://git-scm.com/
I am looking at putting a code base that runs several websites into version control. There are several instances of this code base running websites on different virtual servers.
The problem I'm grappling with is that each of these separate instances of more or less the same code has sub-directories with site-specific functions. But it seems that version control systems want to control the entire directory hierarchy.
For instance, each instance has the directory
/www/smarty/libs/plugins/
Where you'll find site-specific functions for smarty. When we are ready to put it into version control, the folder /www would be the root.
So one option is to have all the site-specific functions go out to all sites. I don't see a problem with that in and of itself, but it seems somehow architecturally 'wrong': there would be a bunch of files that belong to only one deployment.
Another option is to have a separate repository for each site's specific files within the code base. But that sounds like it could quickly become a nightmare when trying to get new sites deployed properly.
What's the best way to do this? The version control system we're looking at is Subversion.
Generally, source control systems should be used to control source. They are not at their best when controlling entire file hierarchies, permissions, and other related things. Those are best left to deployment configuration.
How about having each of the projects and directories you need represented once in the version control system? Then, in a separate directory (perhaps called /build/), keep the various configuration layouts. You might have an Ant file that builds each site, or Maven. Or you can use tools like Capistrano or Fabric to get more control over each deployment.
The tools are made to be flexible (generally), so here are some suggestions:
Most VCSs allow you to ignore files and directories through some mechanism (e.g. Mercurial's .hgignore file), so you should be able to target what you want to control versus what you shouldn't.
Separate the files/directories into a common resource project and site-specific projects, then use a build system to integrate them into a deployable package. The build system can be as simple as a shell script or as sophisticated as a full framework. If it's a really simple integration, the VCS may have basic features for merging bases (e.g. Mercurial subrepositories; see the sketch below).
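For example, a Mercurial subrepository is declared in an .hgsub file at the repository root (the path and URL here are made up); once that file is committed, hg clone and hg update handle the nested repository for you:

sites/common = https://hg.example.com/common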
With subversion, you could have a bunch of repositories:
www be in a general repository
plugins each be in a site-specific repository
Then have nested working copies:
svn co http://www_repo www
cd www/smarty/libs
svn co http://foo_plugins_repo plugins
Tip: add plugins to svn:ignore property of www/smarty/libs
svn propset svn:ignore "plugins" www/smarty/libs
You could certainly do that with git too (through .gitignore), and probably with other version control systems but I don't know them.
(Alternatively, you could skip the nested working copy part (which can freak some people out) and check the working copies out side by side, using a symlink in lieu of smarty/libs/plugins; the svn:ignore tip still applies. See the sketch below.)
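That side-by-side variant would look something like this (same placeholder URLs as above):

svn co http://www_repo www
svn co http://foo_plugins_repo foo_plugins
ln -s ../../../foo_plugins www/smarty/libs/plugins
svn propset svn:ignore "plugins" www/smarty/libs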
You're missing a "build" step, which would take the source in source control and create the deployment bundles for the different sites. Only one source package is needed; different build configurations create the different deployment packages. Don't try to put the deployment set directly into source control: it is not the source!
I believe the best thing to do would be to create a top-level directory in your repository for each site (Site-01, Site-02, etc.) and put the source tree inside each of those directories. Then you can check out the projects separately. I think it's acceptable, and somewhat standard, to use the same repository for all the projects your company is involved with.
My terminology might be off-kilter, but I believe the fundamental idea is sound.
I am working on a project that depends on external programs, and needs to know the paths to them. I develop and use the project on several machines, using mercurial for version control. The paths are machine-dependent, so I keep them in a machine-specific config file. I would like the config file for each host to be version-controlled, but I need to ensure that the config file from one host would never overwrite the config file for another host when pushing or pulling between hosts. Is there any way to accomplish this?
In principle, Wim is right: machine-specific configuration shouldn't be part of the project's source control. As long as you work alone, this isn't a real problem, but once you want to provide generic releases of your project, you have to get rid of it. In that case you might not be happy that the change history contains files with machine-specific data.
Nevertheless, it may make sense to have machine-specific data in version-controlled files (personally, I do this for my dot-rc files and shell scripts). In that case I would suggest separating generic and specific configuration into different files and including/utilizing the specific one at build or run time, depending on the machine currently in use.
If it is not possible to detect the current machine automatically, you could still create an unversioned symbolic link on each machine, pointing to the appropriate specific configuration file. For instance, on the machine foo the file layout could look like this:
generic.conf                          (version-controlled)
specific-foo.conf                     (version-controlled)
specific-bar.conf                     (version-controlled)
specific.conf → specific-foo.conf     (unversioned symbolic link)
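On foo, the link would be created once, outside of version control:

ln -s specific-foo.conf specific.conf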
An alternative to symbolic links is a hook which automatically creates specific.conf, e.g. on each invocation of hg update. As hooks are set in a repository's hgrc file, they can be defined individually on each machine. Here's an example of a corresponding hooks section in the .hg/hgrc file of a repository clone on the machine foo:
[hooks]
update = cp specific-foo.conf specific.conf
Machine-specific configuration settings should not be version-controlled in the same repository as the project code.
However, it is still a good idea to put an inactive sample configuration file in your code repository. This sample could show a number of typical locations for the external program paths you mentioned, as commented-out lines. That way you make it easier to get your project running on new machines.
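Such a sample might look like this (the file and program names are made up):

# sample.conf: copy to specific.conf and adjust the paths for this machine
# converter = /usr/local/bin/converter
# converter = /opt/tools/bin/converter
# renderer = C:\Tools\renderer.exe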