How to configure Capistrano to deploy to the same directory?

I understand that Capistrano (v2.15.5) deploys each release to a new directory and symlinks it in deploy:create_symlink. However, we have a proprietary module on our web server that breaks on every deploy because it is licensed to a specific directory. I understand the advantages of the symlink and being able to roll back etc., but we need to deploy to the same directory. I can't find any documentation that covers this; is it possible without editing the source?

Provided I understood you correctly, this should help:
set :deploy_to, "<proprietary path>"
This will put the releases dir and the current symlink into <proprietary path>.
For more control over all relevant directories, have a look at deploy.rb from the 2.x branch here:
https://github.com/capistrano/capistrano/blob/legacy-v2/lib/capistrano/recipes/deploy.rb
In particular, see lines 50-66. You can override any of those _cset statements with set, just as in the example above.
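For instance, a deploy.rb along these lines (a minimal sketch; the path is a placeholder for your licensed directory, and the variable names are the _cset defaults from that recipe):

# deploy.rb -- minimal sketch, hypothetical path
set :deploy_to,   "/var/www/licensed_app"  # everything lives under the licensed path
set :version_dir, "releases"               # subdirectory holding the timestamped releases
set :shared_dir,  "shared"                 # subdirectory shared across releases
set :current_dir, "current"                # name of the symlink pointing at the live release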

Related

Keeping SSIS packages under source control

I store all SSIS packages in a Subversion repository, along with their configuration files. The configuration file is almost always stored in the same folder as the package.
The problem is that SSIS always seems to store the path to the configuration file (the one saved in the package itself) as an absolute path.
When someone else checks out the folder with the package to a location different from the one on my development PC, the configuration file is not detected (because my absolute path is stored and it doesn't exist on the other developer's PC). So the other developer has to remove this configuration and add it again from where it now sits on his local hard drive. The changed package is then saved, which causes a new version to be committed. When I get that version from SVN, it no longer matches the local path on my PC.
On a related note: another developer may want to change values in the configuration file as well. If I later get the latest version of everything from SVN, the package will no longer work on my PC.
How do you work around these inconveniences?
Another solution is to save your configuration in a database, with an environment variable as the first configuration to tell it which database to look in; that's what we do. We have scripts to populate ssisconfig for each server in our source control, but the package uses the actual table data from the database named in the environment variable we are using.
Anyone who has heard my SQL Saturday presentations knows I don't much care for XML and this is one of the reasons. A trick to using XML configuration with varying locations is to use an environment variable (indirect configuration) to direct SSIS where it can look for that resource. The big, big downside to this approach is you'd generally need to create an environment variable for each set of configuration files or have a massive, honking .dtsconfig file which becomes painful for versioning.
The option I prefer, if XML configuration is a must, is to remove the "variableness": developers and admins get together and agree that "there will be a folder everywhere SSIS is done to hold configuration files, and that location is X", and then it's just a matter of solving for X. At a previous job, we used D:\ssisdata\configs.
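As a hypothetical illustration of the indirect-configuration approach: create a machine-level environment variable on every server whose value is the path to the config file under the agreed folder, then point the package's XML configuration at that variable (the "configuration location is stored in an environment variable" option in the wizard). The variable name and file name below are made up:

rem Run once per server (elevated prompt); the new value is only seen by new processes
setx SSIS_MYPACKAGE_CONFIG "D:\ssisdata\configs\MyPackage.dtsConfig" /M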
@HLGEM's approach of a table for configurations is hands down my favorite approach to SSIS configuration (until you get to 2012 and its project deployment model, where configuration is an entirely different animal).
I add a folder called "config" under my project's folder, add it to source control, and maintain the config file in this folder. You can also add it to the SSIS project if you like.
I think it's a good solution because everybody can have this folder and download the config file.
When the package is deployed, it will read the config file from the location you specify in the deployment manifest, so this solution won't impact your development.

Why does capifony create both the /shared/app/logs and /shared/logs folders? (symfony2)

I use Capifony to deploy my Symfony2 project to the production server. As a result of the deploy:setup task, a folder called /shared/logs was created. However, Symfony2 actually uses /shared/app/logs to store the log files, while shared/logs remains empty.
Anyone know what's happening?
The shared/logs folder is no longer created as of capifony version 2.1.7.
I've just checked the latest Capistrano deploy recipe in trunk, and it seems this is default behaviour of Capistrano rather than capifony: during deploy:setup it creates folders using only the last part of each path in the shared_children array instead of the full path. Later on, capifony's deploy:shared_children task creates the subfolders with the full path.
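Paraphrased (not copied verbatim), the relevant part of Capistrano 2's deploy:setup behaves roughly like this, which is why a shared_children entry such as "app/logs" only produces shared/logs at setup time:

# rough sketch of what deploy:setup does with shared_children
dirs  = [deploy_to, releases_path, shared_path]
dirs += shared_children.map { |d| File.join(shared_path, d.split('/').last) }  # keeps only the last path segment
run "mkdir -p #{dirs.join(' ')}"

capifony's own task then creates the full app/logs path under shared, so both directories end up existing.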

Packaging with NAnt, how to handle different environments

I'm using NAnt to build an ASP.NET MVC project.
The NAnt script then creates a zip package, containing a deploy script and all the necessary files.
The deploy script backs up the current running website, sets up the newer version of the website and updates the DB.
This works fine for a single environment.
However, we're asked more and more to set up a Staging/Acceptance environment next to production. These environments, of course, differ in file structure, DB server, config settings, etc.
How can I best handle this in the deploy scripts? I don't want to create separate variables for each environment, distinguishable by name only.
Providing defaults and providing the variables in separate files seems more logical.
Does anyone have practical experiences with this?
Store the things that you think are likely to change between environments in config files.
Visual Studio can do the heavy lifting here if you like; you can create settings and specify default values from the Settings tab of a Visual Studio project's properties.
This will create the config file for you and provide strongly-typed access through Properties.Settings.Default.
As for handling multiple environments through your build, I've seen some people recommend maintaining multiple copies of the config files - one for each environment for example - and others recommend using nant to modify the config files during the build or deployment phase. You can use a property passed to nant on the command line (for example) to select which environment you are building (or deploying, depending on how you're doing it).
I don't recommend either of these approaches because:
They both require changes to your build to support new environments.
If you change a setting in a deployed environment and forget to update the build then the next deployment will reset the change (somewhat defeating the point of config settings).
If someone creates a new environment (let's say they want to explore issues arising from upgrading to a new version of SQL Server, for example) and doesn't fancy creating all new config files in the build system, they might decide to just use an existing environment's settings. Let's say they choose to deploy using the live settings and forget to change something afterwards. Your new 'test' environment could now be pointing at live kit.
I create a copy of each config file (called web.config.example, for example) and comment out the settings within it (unless they have meaningful defaults). I check these in and have them deployed instead of the real web.config; that is, web.config is NOT deployed automatically, and web.config.example is deployed as web.config.example.
The admin of the new environment then has to copy and rename the file to web.config and provide meaningful values. I also put all the calls to the settings behind my own wrapper class; if a mandatory setting is missing, I throw an exception.
The build and my environments no longer depend on each other - one build can be deployed to any environment.
If a setting is missing (a new environment or a new setting in an existing environment) then you get a nice clear exception raised to tell the admin what to do.
Existing settings are not altered after an upgrade because only the .example files were updated. It's an admin task to compare the current settings with the latest example and revise if necessary.
To configure the deployment, you could put all the environmental settings (install paths, etc.) into NAnt properties and move them into a separate file (settings.build, for example), then use the NAnt include task to pull that file in at the top of your deployment file (deploy.build, for example). You can then deploy a new version of deploy.build without overwriting your config changes, as they are in settings.build. If a new property is introduced into deploy.build, NAnt will fail with a clear message telling you that you haven't set that property.
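A minimal sketch of that layout (file names, property names, and values here are just examples):

<!-- settings.build: environment-specific values, maintained per environment -->
<project name="settings">
    <property name="install.dir" value="D:\websites\myapp" />
    <property name="db.server"   value="SQLSERVER01" />
</project>

<!-- deploy.build: deployment logic, safe to replace on every release -->
<project name="deploy" default="deploy">
    <include buildfile="settings.build" />
    <target name="deploy">
        <echo message="Deploying to ${install.dir} against ${db.server}" />
        <!-- copy files, update the DB, etc. -->
    </target>
</project>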

How should I handle Sphinx configuration in version control?

I have a problem with my development workflow and Sphinx. I want to keep configuration file for Sphinx in version control so it's easier to manage. This means it's easier to link the file to code updates, etc ... However, the configuration file is stored in /usr/local/etc.
There are two solutions I can think of. Store the file in the repository and move it to the correct folder on deployment or recompile Sphinx to look for the file in my repository. I had a suggestion from someone to use a symlink, but that still requires a change on deployment.
Is there an elegant solution in Sphinx I'm missing?
Perhaps have the /usr/local/etc/sphinx.conf file be a script that pulls the actual Sphinx config from the file in your repo.
See http://sphinxsearch.com/docs/current.html#rel098, scroll down to the general section, and you'll see:
"added scripting (shebang syntax) support to config files (example: #!/usr/bin/php in the first line)"

Huge amount of JAR files in jboss/server/web/tmp/vfs-nested.tmp directory

Sometimes we have a huge number of JAR files in the jboss/server/web/tmp/vfs-nested.tmp directory.
For example, today this directory contained over 350k JAR files.
But on other hosts there are only 2 JAR files in this directory.
What can be the root cause of this problem?
We use JBoss 5.1
UPDATE:
I found the following information in the release notes for JBoss 5.1.0.GA:
"JBoss VFS provides a set of different switches to control its internal behavior. JBoss AS sets jboss.vfs.forceCopy=true by default. To see all the provided VFS flags check out the code of the VFSUtils.java class."
So I do not understand what I should set.
Should I set -Djboss.vfs.forceNoCopy=true or -Djboss.vfs.forceCopy=false?
Or should I set both of them?
UPDATE 1:
I have read the entire thread http://community.jboss.org/thread/2148?start=0&tstart=0
and now I am not sure whether I should change either jboss.vfs.forceCopy or jboss.vfs.forceNoCopy.
According to this thread, I will get an OutOfMemory error instead of a huge number of files in the tmp dir.
From here: http://sourceforge.net/project/shownotes.php?release_id=575410
"Excessive nestedjarNNN.tmp files in the tmp directory. The VFS unwraps nested jars by extracting the nested jar into a tmp file in the java tmp directory. This can result in a large number of files that fill up the tmp directory. You can disable this behavior by setting -Djboss.vfs.forceNoCopy=true on command line used to start jboss. This will be enabled by default in a future release, JBAS-4389."
jskaggz has a good answer. In addition, I have this at the beginning of my run.bat file:
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\tmp
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\work
rmdir /s /q c:\apps\jboss-5.1.0.ga\server\default\log
mkdir c:\apps\jboss-5.1.0.ga\server\default\tmp
mkdir c:\apps\jboss-5.1.0.ga\server\default\work
mkdir c:\apps\jboss-5.1.0.ga\server\default\log
echo --- Cleared temp folders ---
I've had problems with old copies of classes hanging around, so this seems to help.
We have solved this problem by using exploded deployment (works for WAR and EAR), as described in the JBoss documentation: http://docs.jboss.org/jbossas/docs/Administration_And_Configuration_Guide/5/html/ch03s01.html
That way VFS is not used.
I had the same issue described above in production and resolved it with the following solution.
I added the following Java options:
-Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache
-Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440
My setup also defines additional deployment directories, so I needed to add these additional directories to the vfs.xml file located in $JBOSS_SERVER_HOME/conf/bootstrap/ in order to see the benefit.
The lifetime setting is, I think, in minutes, so I set it to a day, as I have a scheduled restart of the server overnight.
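For reference, these options typically end up in JAVA_OPTS; with a standard JBoss 5 layout (an assumption about your setup) that means something like the following in bin/run.conf (or run.conf.bat on Windows):

# bin/run.conf -- appended to the options the server is started with
JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.cache=org.jboss.virtual.plugins.cache.IterableTimedVFSCache"
JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.cache.TimedPolicyCaching.lifetime=1440"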
Prior to finding this solution I had also tried using -Djboss.vfs.forceNoCopy=true and -Djboss.vfs.forceCopy=false.
This appeared to work, but I noticed the application ran a lot slower, presumably because these settings turn VFS caching off.
My JBoss version is jboss-5.1.0.GA, and my application runs in a cluster in production.
I found a lot of others having the same problem running in cluster (or farm) environments.
https://issues.jboss.org/browse/JBAS-7126 describes solving the problem by using a farm directory as the deployment directory.
I had the same problem using a 2nd deploy directory.
The JAR files from my applications in this 2nd deploy directory got copied until the disk was full.
I tried adding the 2nd deploy directory the same way as described at https://issues.jboss.org/browse/JBAS-7126 for the farm directory.
It works well!
We were facing the same issue and were able to circumvent it by using a farm directory as the deployment directory.
After putting that process in place, we faced one more issue due to the nature of our DEV environment (we have a clustered environment and many developers deploying to the shared DEV environment): we were not getting consistent results while deploying the EARs and WARs that way. We circumvented that by making sure that the EARs and JARs being deployed are touched (http://en.wikipedia.org/wiki/Touch_(Unix)) on the servers, to make sure that inconsistencies are avoided.