Capistrano 3 - best practice for a task which symlinks release path to shared dir

I've been using Capistrano 3 today and am wondering what the best way would be to symlink my release path to my shared directory. This is the opposite way round from what I've done before with Capistrano 2, mainly because I only have a small set of files versioned in this case, leaving the bulk of the project files unversioned in my shared directory. I think it makes sense to do it this way here, but any advice otherwise would be welcome if it's likely to be an issue.
I'm also wondering how a symlink like this, pointing the latest release path to the shared directory, should be handled in Capistrano 3. Thanks for any pointers.
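For reference, a minimal sketch of what such a task could look like in Capistrano 3 (the task name, the link location, the :app role and the deploy:updated hook are all assumptions, not something stated in the question):

namespace :deploy do
  desc 'Symlink the latest release to the shared directory'
  task :link_release_to_shared do
    on roles(:app) do
      # creates <release_path>/files -> <shared_path>
      # swap the two path arguments if you want the link to live on the shared side instead
      execute :ln, '-nfs', shared_path, "#{release_path}/files"
    end
  end
end

after 'deploy:updated', 'deploy:link_release_to_shared'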

Related

Is there a way to prevent some files from appearing on the main branch in the remote repo?

I'm working on a simple project with other people. They use Eclipse to build it, but I don't like Eclipse and wrote a makefile and some batch/bash scripts to do the job for me.
I want to keep track of changes I make to these files, but I don't want others to see them in the main repo (at least not on the default branch; it would be okay to have my own). I could make a subrepo, but I don't want to type the folder each time I build something (besides, keeping the makefile NOT in the root would be a bit awkward).
What are my options?
Use the MQ extension:
Add the needed (personal, local) files in one or more MQ patches.
Work locally with the patch applied, and unapply the patch before you push.
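Roughly, that workflow might look like the following (the patch and file names are just placeholders, and MQ has to be enabled in your .hgrc first):

hg qnew local-build-files      # start a patch to hold your personal files
hg add Makefile build.sh       # the files you don't want in the main repo
hg qrefresh                    # fold them into the patch
# ... work and build with the patch applied ...
hg qpop -a                     # unapply the patch before pushing
hg push
hg qpush -a                    # reapply it afterwards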

How can I add a directory tree to my github repo?

I've been working on a project that's fairly far along now and I decided it's time to use some sort of version control, so I decided to go with GitHub. Before I get in too deep, let me state explicitly that I am new to GitHub.
My project resides in a directory that contains myriad subdirectories and files of all different kinds. I'd like to take my project directory as is (structure and all) and put it in my github repo.
I followed the tutorials on GitHub's webpage, created the repo, and manually added some files. Obviously I don't want to manually add every file (there are several hundred). I'd like to know how I can add the root directory, or for that matter any parent directory, and all files/folders in said directory. In other words, I'm looking for a recursive add.
I read on this SO page (How to create folder in github repository?) that you can just use
git add directory/
That works fine for me when I'm dealing with the lowest level directory, but when I try the same command on a directory with subdirectories my terminal just sits there and I have to ctrl-c. I can't tell if it's just taking a long time (as I mentioned there are lots of files) or if this is just the wrong way to add a directory with subdirectories.
Apologies in advance if this is a super ignorant question -- I have looked at a lot of blogs/posts/etc and I cannot find a solution that seems to work.
Use the Current Working Directory
Assuming you're on Linux or OS X, from the command line you would do the following:
git add .
from the root of your repository tree. That will add all non-ignored files, including everything in subdirectories (Git doesn't track empty directories), to the repository.
From the root directory (the one with all the subdirectories), use git add -A.
If you have a ton of subdirectories and files, it may take a long while, so just let it sit there until it's done.
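For example, from the project root (the remote name and branch below are just the common defaults; adjust to whatever your repo actually uses):

git add -A                  # stage all new, modified and deleted paths, recursively
git status                  # sanity-check what is about to be committed
git commit -m "Import existing project tree"
git push origin master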

How to force a directory to stay in exact sync with subversion server

I have a directory structure containing a bunch of config files for an application. The structure is maintained in Subversion, and then a few systems have that directory structure checked out. Developers make changes to the structure in the repository, and a script on the servers just runs an "svn update" periodically.
However, sometimes we have people who will inadvertently remove a .svn directory under one of the directories, or stick a file in that doesn't belong. I do what I can to cut off the hands of the procedural unfaithful, but I'd still prefer for my update script to be able to gracefully (well, automatically) handle these changes.
So, what I need is a way to delete files which are not in subversion, and a way to go ahead and stomp on a local directory which is in the way of something in the repository. So, warnings like
Fetching external item into '/path/to/a/dir'
svn: warning: '/path/to/a/dir' is not a working copy
and
Fetching external item into '/path/to/another/dir'
svn: warning: Failed to add directory '/path/to/another/dir': an unversioned directory of the same name already exists
should be automatically resolved.
I'm concerned that I'll have to either parse the svn status output in a script, or use the svn C API and write my own "cleanup" program to make this work (and yes, it has to work this way; rsync / tar+scp, and whatever else aren't options for a variety of reasons). But if anyone has a solution (or partial solution) which takes care of the issue, I'd appreciate hearing about it. :)
How about
rm -rf $project
svn checkout svn+ssh://server/usr/local/svn/repos/$project
I wrote a Perl script to first run svn cleanup to handle any locks, and then parse the --xml output of svn status, removing anything which has a bad status (except for externals, which are a little more complicated).
Then I found this:
http://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/svn-clean
Even though this doesn't do everything I want, I'll probably discard the bulk of my code and just enhance this a little. My XML parsing is not as pretty as it could be, and I'm sure this is somewhat faster than launching a system command (which matters on a very large repository and a command which is run every five minutes).
I ultimately found that script in the answer to this question - Automatically remove Subversion unversioned files - hidden among all the suggestions to use Tortoise SVN.
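If all you need is the simple, non-XML flavour of that idea, a sketch along these lines is common (it assumes GNU xargs, the column offset of 9 matches svn 1.6+ status output, and it does not special-case externals, so try it on a throwaway working copy first):

svn cleanup
svn status --no-ignore | grep '^[?I]' | cut -c9- | xargs -d '\n' -r rm -rf   # delete unversioned and ignored items
svn revert -R .                                                              # undo local edits to versioned files
svn update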

What's wrong with running a live site from a DVCS clone?

I see insinuations here and there that it's bad to run a live deployment directly off a DVCS clone, and better to export a clean tree or tarball and deploy that. It seems to me that running directly from a DVCS clone has several advantages:
No need to transport the entire codebase on every deployment.
Trivial to update code to any desired version.
Trivial to rollback to previous version if deployment goes badly.
And I can't really see any disadvantages. The presence of the repo files (in my case, a single .hg/ directory) causes no problems.
Is there really any good reason not to run a live deployment off a DVCS clone?
This is what I do. The only "disadvantage" is you can't really version-control the databases or site-generated content (user uploads). It's not a disadvantage at all though because there's no alternative. Just as usual, you need a backup script to copy all that content out.
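As a trivial illustration of that kind of backup step (the paths and host are invented, and a MySQL database is assumed purely for the sake of the example):

rsync -a --delete /var/www/site/uploads/ backuphost:/backups/site/uploads/
mysqldump --single-transaction sitedb | gzip > /backups/site/sitedb-$(date +%F).sql.gz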
This isn't an answer but rather an explanation of modern webapp directory layouts. A very simple Python webapp might look something like this:
webapp/
    .hg/
    webroot/
    handler.py
You would set it up so that the webserver only serves static content from webroot/ and if the path doesn't exist in there it asks python (in this case) for that page.
Since none of the server-side source-code is within webroot/, it cannot be served (unless you have a python directive ordering it to serve source code). The same applies to the .hg/ directory.
Note: SVN (< 1.7) and CVS are exceptions as they spray their .svn directories over every subdirectory. In this case that would include webroot/ so, yes, you'd need to make sure you weren't serving hidden files but this is commonly the case anyway.
Well, I know of one.
If someone can gain access to your .hg directory, they could potentially view your source code. But really, access to this directory should be disallowed by the server or by .htaccess files.
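For instance, with Apache a one-line .htaccess rule along these lines is a common way to do that (the same idea applies to .git or .svn):

RedirectMatch 404 /\.hg(/|$)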

Emulating symlink-like behaviour in a source control repository

Suppose I have the following (desired) folder structure:
* CommonProject
* Project#1
----> CommonProject (link)
* Project#2
----> CommonProject (link)
Where the CommonProject is the location of the source belonging to that project, and CommonProject(link) is merely a soft link to the main location. If we imagine this as a tree-view in a visual client, if I expand Project#1 I will see CommonProject there as a subdirectory, even though the files are not actually stored there.
The purpose of this is to enable the following behaviour:
When I check out Project#1 I get the files associated with that project as well as a subfolder CommonProject containing all of its files (as if Project#1 contained a copy of the files in the version control repository). Now if I were to modify CommonProject's files inside of Project#1 and submit my changes to the repository, the changes would go into the CommonProject location (no file is actually stored locally under Project#1 in the repository). And if I were then to sync Project#2, as it also contains a symlink to CommonProject, it would now get my updates.
Essentially the duplication of files only exists on my machine, but in the repository there is only one version of CommonProject.
I know Perforce can't do this without juggling 3 specs. This is very complicated and error prone, especially when a lot of people do it. Is there a source control repository out there that can do this? (A pointer to some docs on how it can be done is a plus.)
Thank you.
Subversion can directly store symlinks in the repository. This only works for operating systems that support symlinks though, as svn just stores the symlink the same way it would with any other file.
I think what you really want is to link to separate projects though. Subversion supports this through externals and git through submodules. Another alternative is to manage this sort of thing within your build process, so that some static resources are gathered when you initialize the build. Generally, updating a utilities library that changes often is going to cause stability problems, so you can do this manually (or with clever scripts) when you need to.
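As a concrete sketch of the externals/submodules route (the URLs, repository layout and project names here are invented):

# Subversion: pull CommonProject into Project1 via an external
svn propset svn:externals "^/CommonProject/trunk CommonProject" Project1
svn commit -m "Add CommonProject as an external" Project1

# Git: the rough equivalent with a submodule
git submodule add https://example.com/CommonProject.git CommonProject
git commit -m "Add CommonProject as a submodule"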
You'd probably be much better off just storing the projects in a flat directory (1 directory per project, all at the same level), and using whatever your build system or IDE provides to link all the stuff together.