How would I change the temporary dir that Capistrano uses?
Example: Instead of /tmp, I want to use /home/user/tmp
My current VPS has /tmp mounted as noexec, which gives me permission denied errors while trying to run cap production deploy.
In Capistrano 3,
set :tmp_dir, '/home/user/tmp'
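For context, here is a minimal sketch of where that setting lives (the application name and paths are placeholders, not from the question):
# config/deploy.rb (Capistrano 3) -- placeholder values
set :application, 'myapp'
set :deploy_to, '/var/www/myapp'
# Stage uploads in a directory that is not mounted noexec:
set :tmp_dir, '/home/user/tmp'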
Are you talking about the remote tmp directory? If yes, here's an example:
set :copy_remote_dir, deploy_to
This changes the default tmp directory, where the archive is copied on the remote server, to the deployment directory instead.
For those still using Capistrano 2, tmp_dir does not exist; you can use copy_dir instead:
set :copy_dir, '/home/user/tmp'
Link to the source code: https://github.com/capistrano/capistrano/blob/legacy-v2/lib/capistrano/recipes/deploy/strategy/copy.rb#L275
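For reference, here is a sketch of how the two Capistrano 2 settings relate when deploying via the copy strategy (the values are examples, not from the question):
# Capistrano 2 deploy.rb -- example values
set :deploy_via, :copy
# local directory where the release archive is built:
set :copy_dir, '/home/user/tmp'
# remote directory the archive is uploaded to before extraction:
set :copy_remote_dir, '/home/user/tmp'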
In both cmd and PowerShell, when I run conda init powershell, it always fails as follows:
(ifcmapping) C:\Windows\system32>conda init powershell
no change C:\Users\haoli\anaconda3\Scripts\conda.exe
no change C:\Users\haoli\anaconda3\Scripts\conda-env.exe
no change C:\Users\haoli\anaconda3\Scripts\conda-script.py
no change C:\Users\haoli\anaconda3\Scripts\conda-env-script.py
no change C:\Users\haoli\anaconda3\condabin\conda.bat
no change C:\Users\haoli\anaconda3\Library\bin\conda.bat
no change C:\Users\haoli\anaconda3\condabin\_conda_activate.bat
no change C:\Users\haoli\anaconda3\condabin\rename_tmp.bat
no change C:\Users\haoli\anaconda3\condabin\conda_auto_activate.bat
no change C:\Users\haoli\anaconda3\condabin\conda_hook.bat
no change C:\Users\haoli\anaconda3\Scripts\activate.bat
no change C:\Users\haoli\anaconda3\condabin\activate.bat
no change C:\Users\haoli\anaconda3\condabin\deactivate.bat
no change C:\Users\haoli\anaconda3\Scripts\activate
no change C:\Users\haoli\anaconda3\Scripts\deactivate
no change C:\Users\haoli\anaconda3\etc\profile.d\conda.sh
no change C:\Users\haoli\anaconda3\etc\fish\conf.d\conda.fish
no change C:\Users\haoli\anaconda3\shell\condabin\Conda.psm1
no change C:\Users\haoli\anaconda3\shell\condabin\conda-hook.ps1
no change C:\Users\haoli\anaconda3\Lib\site-packages\xontrib\conda.xsh
no change C:\Users\haoli\anaconda3\etc\profile.d\conda.csh
needs sudo C:\Users\haoli\OneDrive\??\WindowsPowerShell\profile.ps1
No action taken.
Operation failed.
I already run it as Admin. How can I solve this? Thanks!
Here:
needs sudo C:\Users\haoli\OneDrive\??\WindowsPowerShell\profile.ps1
I assume the ??\ are non-English characters; you need to change the folder name to English.
Check OneDrive online: the folder name was in English on my local PC but not in English in the cloud. I changed the folder name to 'Documents' (where my PowerShell profile is) in the cloud and synced the change to local. That worked for me.
On Windows, when you let OneDrive back up your Documents folder, the folder points to the Documents folder in the cloud, and its actual name depends on the language of your system, which I assume is "文档" (Chinese for "Documents") in your case.
I just disabled the Documents backup in OneDrive settings and re-ran conda init, which gave me this result:
modified C:\Users\username\Documents\WindowsPowerShell\profile.ps1
If you'd prefer not to disable the OneDrive backup, you can try Zixin's answer; I'm just not sure whether you should mess with the default settings of the backup folder or whether it will cause problems.
I'm using lftp to deploy a website via Travis CI. There is a build process before the deployment; for that reason, a build directory is present and pushed to the root of the FTP server.
lftp $FTP_URL -e "glob -d mirror build . --reverse --delete-first --parallel=10 && exit"
It works quite well, but I dislike having downtime / temporary PHP parse errors caused by missing files on my website. What is the best way to work around that issue?
My first approach was an option to set a temporary directory, but the lftp man page says there are only options for temporary files. I still tried the option, but it didn't help.
My second approach was to use "mirror build temp" to upload to a temporary folder and then replace the root with it. The problem here is that I cannot exclude the temp folder while deleting the old files and folders, e.g. with rm -rf *.
For small changes not involving adding/removing PHP files, set xfer:use-temp-file should be sufficient. Also, don't use --delete-first, as it causes lftp to delete obsolete files before uploading.
For larger changes I'd create a separate directory for each version of the site and redirect the web server to the current directory using .htaccess mod_rewrite or some other configuration file. This technique allows an atomic switch to the new version (and back if needed). Besides, you will be able to do final pre-production testing of the new version if you redirect to it conditionally, based on your IP address or some other rule.
If you don't want to re-upload the whole site for each new version and the FTP server supports FXP with itself, then you can copy the old version to a new directory using mirror old_directory ftp://user@example.com/new_directory, then update the new directory using mirror -eR local_dir new_directory.
This is a zero-downtime pattern; each placeholder should be replaced:
lftp $FTP_URL -e "mirror {SOURCE} {TARGET}-new-{TIMESTAMP} --reverse --delete-first;
mv {TARGET} {TARGET}-old-{TIMESTAMP};
mv {TARGET}-new-{TIMESTAMP} {TARGET};
rm -rf {TARGET}-old-{TIMESTAMP};
exit"
I'm using Sphinx on a Linux production server as well as a Windows dev machine running WampServer.
The index configurations in sphinx.conf each require a path setting for the output file name. Because the filesystems on the production server and dev machine are different, I have to have two lines and then comment one out depending on which server I'm using.
#path = /path/to/folder/name #LIVE
path = C:\wamp\www\site\path\to\folder\name #LOCALHOST
Since I have lots of indexes, it gets really old having to constantly comment and uncomment dozens of lines every time I need to update the file.
Using relative paths would be the ideal solution, but when I tried that I received the following error when running the indexer:
FATAL: failed to open ../folder/name.tmp.spl: Invalid argument, will not index. Try --rotate option.
Is it possible to use relative paths in sphinx.conf?
You can use relative paths, but it's kind of tricky because the various utilities will have different working directories.
E.g. on Windows the searchd service will start, IIRC, with a working directory of $WINDIR$\System32.
On Linux, via crontab, I think it keeps whatever working directory is left over from before, so you would have to change the folder in the actual command line.
That is, the path is not relative to the config file; it's relative to the current working directory.
Personally I use a version control system (SVN, actually) to manage it. The version from dev is always the one committed to the repository; the 'working copy' on the LIVE server has had the paths edited to the right location. So when you 'update' to the latest file, only the changes are merged, leaving the local file paths intact.
Other people use a dynamic config file. The config file can be a script (PHP/Python/Perl etc.), but this only works on Linux, so it won't help you.
Or you can just have a 'publish' script. Basically, you edit a 'master' config file, one that can be freely copied to all servers. Then a 'publish' script writes the appropriate local path. It can do this with some pretty simple search and replace:
<?php
// Writes a server-specific sphinx.conf from a shared master file.
if (trim(`hostname`) == 'live') {
    $path = '/path/to/folder';
} else {
    $path = 'C:\wamp\www\site\path\to\folder';
}
$contents = file_get_contents('sphinx.conf.master');
$contents = str_replace('$path', $path, $contents);
file_put_contents('sphinx.conf', $contents);
Then have path = $path/name in the master config file, which will get replaced with the proper path when you run the script on the local machine (forward slashes also work on Windows).
Since I'm running my rails app as root, it creates files that are owned by root in the tmp directory. Because of this
cap production deploy:cleanup
can't remove old releases because it is not run as root.
I've looked at the Capistrano v3 code, but I don't see a way to run the cleanup command as root. Is this option missing, or is this problem occurring because I'm doing something wrong in another place of the deployment flow?
I start the app as root because I need to bind to port 80.
What you can also do is trigger a task just before cleaning up the old releases:
namespace :deploy do
  before :cleanup, :cleanup_permissions

  desc 'Set permissions on old releases before cleanup'
  task :cleanup_permissions do
    on release_roles :all do |host|
      releases = capture(:ls, '-x', releases_path).split
      if releases.count >= fetch(:keep_releases)
        info "Cleaning permissions on old releases"
        directories = (releases - releases.last(1))
        if directories.any?
          directories.each do |release|
            within releases_path.join(release) do
              # chown root-owned files back to the deploy user so
              # deploy:cleanup can delete them
              execute :sudo, :chown, '-R', 'deployuser', 'path/to/your/files/written/by/root'
            end
          end
        else
          info t(:no_old_releases, host: host.to_s, keep_releases: fetch(:keep_releases))
        end
      end
    end
  end
end
Note that you'll need to give your deployment user the right to execute this specific sudo command (with a sudoers definition file).
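As an illustrative sketch, the grant and the matching call could look like this ('deployuser' and the chown target are placeholders):
# /etc/sudoers.d/deployuser on each server -- placeholder user and command:
#
#   deployuser ALL=(root) NOPASSWD: /bin/chown
#
# With that grant in place, the task above can run its single sudo command:
execute :sudo, :chown, '-R', 'deployuser', 'path/to/your/files/written/by/root'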
I've looked at the Capistrano v3 code, but I don't see a way to run the cleanup command as root. Is this option missing, or is this problem occurring because I'm doing something wrong in another place of the deployment flow?
There is no secret sauce in Capistrano; we rely on you having correctly set up the permissions for your deploy user, as documented at http://www.capistranorb.com/
Removing directories requires write permissions on the parent directory, that is to say, given the following directory structure:
/var/www/releases/
\- 20131015180000
\- 20131015181500
\- 20131015183000
You need write permission on the /var/www/releases/ directory, as the list of files and directories in that directory is stored in the directory itself.
From a similar Stack Overflow question:
In UNIX and Linux, the ability to remove a file is not determined by the access bits of that file. It is determined by the access bits of the directory which contains the file.
From the Wikipedia article on Unix File Permissions:
The write permission grants the ability to modify a file. When set for a directory, this permission grants the ability to modify entries in the directory. This includes creating files, deleting files, and renaming files.
One of the things you may want to do is create a group called app or web on your Linux box and add root and the deploy user to the same group. Then, as part of your deployment, chmod the release_path permissions to g+s, which will ensure that any new files created by the root user are group writable.
You should then be able to remove the old folders as deploy user.
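As a sketch of how that could be automated (the hook point and shared group are assumptions, not part of the original answer):
# config/deploy.rb -- assumes root and the deploy user share a group, e.g. 'app'
namespace :deploy do
  desc 'Keep new releases group-writable so the deploy user can clean them up'
  task :fix_release_permissions do
    on roles(:all) do
      # setgid on the release tree: files created later (even by root)
      # inherit the directory's group instead of root's primary group
      execute :chmod, '-R', 'g+s', release_path
      # group write so group members can modify and remove entries
      execute :chmod, '-R', 'g+w', release_path
    end
  end
end
after 'deploy:updated', 'deploy:fix_release_permissions'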
I was running into similar issues, so, to confirm, I logged into my web server via SSH and tried rm -rf [directory], which also failed due to the same permissions issues, even logged in as admin. Running chmod -R 755 [directory]/, then rm -rf [directory]/ did work, though.
To fix it, in the project's silverstripe.rake file, I changed the command being run from:
execute :chown, "-R [user]:[group] /path/to/project"
to:
execute :chmod, "-R 755 /path/to/project"
So far, no more issues with deleting the oldest release when running cap [release name] deploy.
I am new to Capistrano and I saw there is a shared folder and also the option :linked_files. I think the shared folder is used to keep files between releases. But my question is, how do files end up in the shared folder?
Also, if I want to symlink another directory into the current directory, e.g. a static folder at some path, how do I put it in linked_dirs?
Lastly, how do I set chmod 755 on linked_files and linked_dirs?
Thank you.
Folders inside your app are symlinks to folders in the shared directory. If your app writes to log/production.log, it will actually write to ../shared/log/production.log. That's how the files end up in the shared folder.
You can see how this works by looking at the feature specs or tests in Capistrano.
If you want to chmod these shared files, you can just do it once directly over SSH, since they won't ever be modified by Capistrano after they've been created.
To add a linked directory, in your deploy.rb:
set :linked_dirs, %w{bin log tmp/backup tmp/pids tmp/cache tmp/sockets vendor/bundle}
or
set :linked_dirs, fetch(:linked_dirs) + %w{public/system}
Capistrano 3.5+
Capistrano 3.5 introduced append for array fields. From the official docs, you should use these:
For Shared Files:
append :linked_files, %w{config/database.yml}
For Shared Directories:
append :linked_dirs, %w{bin log public/uploads vendor/bundle}
I've written a task for Capistrano 3 to upload your config files to the shared folder of each of your servers; it'll check these directories in order:
config/deploy/config/:stage/*.yml
config/deploy/config/*.yml
and upload all the config files it finds. It'll only upload files that have changed. Note also that if you have the same file in both directories, the second one will be ignored.
Here's the code: https://gist.github.com/Jesus/448d618c83fb0445ebbf
One last thing: this task just uploads the config files to your remote shared folder; you still need to set linked_files in config/deploy.rb, e.g.:
set :linked_files, %w{config/database.yml config/aws.yml}
UPDATE:
If you're using Git, you'll probably want to ignore these files:
echo "config/deploy/config/*" >> .gitignore
There are 3 simple steps you can follow for a file that you don't want to change across consecutive releases. First, add your file to the linked_files list:
set :linked_files, fetch(:linked_files, []).push('config.php')
Do this for all the files that you want to share. Next, put the file on the remote server from your local machine through scp:
scp config.php deployer@amazon:~/capistrano/shared/config.php
Now, deploy through the command given below:
bundle exec cap staging deploy
Of course, staging can be changed as per your requirements; it may be production, sandbox, etc.
One more thing: you don't want your team members to commit such files, so add this file to your .gitignore and push it to your remote Git repo.
For Capistrano 3.5+, as specified in the official docs:
append :linked_dirs, ".bundle", "tmp"
For me none of the above worked, so I ended up adding two tasks to the end of the deployment process:
namespace :your_company do
  desc "remove index.php"
  task :rm_files do
    on roles(:all) do
      execute "rm -rf #{release_path}/index.php"
    end
  end

  desc "add symlink to index.php"
  task :add_files do
    on roles(:all) do
      execute "ln -sf #{shared_path}/index.php #{release_path}/index.php"
    end
  end
end
after "deploy:finished", "your_company:rm_files"
after "deploy:finished", "your_company:add_files"