Capistrano deploy how to use use_sudo and admin_runner - deployment

I'm trying to configure Capistrano so that it works for our server setup. We are deploying Symfony projects, so I'm also using capifony. I'm still experiencing some problems with permissions.
On our server every project runs as a project user, so every project has its own user. I therefore configured use_sudo and set it to true, and I configured admin_runner to be the user of the project. It still didn't work, so I modified capifony to use try_sudo instead of the regular run, which made it work a little better. But I'm confused about what to use in which case. You have try_sudo, sudo and run, but which one is needed for which use case?
When you use run I think it will always run as your local user.
try_sudo, I think, will check whether the use_sudo flag is true: if so it will use the sudo command, if not it will run as the local user. If you have admin_runner configured it will sudo to the user configured as admin_runner.
sudo will always try to sudo.
Now my problem is the deploy:symlink method: this is also just a regular run command, so it executes as the local user, which gives permission problems when I try to view the website.
So can anyone tell me if my description of the three commands is correct? And does anyone know how admin_runner and use_sudo are supposed to be used, so that the symlink is also done correctly (along with all the other commands Capistrano runs)?
Kind regards,
Daan

Apologies for such a tardy answer, Daan. Your understanding of Capistrano is correct. Note also that the :use_sudo flag defaults to true.
In Capistrano 2.11.2, you'll find lib/capistrano/configuration/variables.rb:
_cset(:run_method) { fetch(:use_sudo, true) ? :sudo : :run }
and lib/capistrano/recipes/deploy.rb:
def try_sudo(*args)
  options = args.last.is_a?(Hash) ? args.pop : {}
  command = args.shift
  raise ArgumentError, "too many arguments" if args.any?
  as = options.fetch(:as, fetch(:admin_runner, nil))
  via = fetch(:run_method, :sudo)
  if command
    invoke_command(command, :via => via, :as => as)
  elsif via == :sudo
    sudo(:as => as)
  else
    ""
  end
end
Perhaps your permissions problem is that your web server, running as a normal user, cannot read the contents of the release directory that your current symlink points to?
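If the deploy:symlink step itself has to run as the project user, one option is to override it in your own recipe so it goes through try_sudo instead of plain run. The following is only a minimal sketch against Capistrano 2 (it drops the rollback handling of the stock task) and relies on the standard latest_release and current_path variables:

# config/deploy.rb -- sketch: re-define deploy:symlink so it honours
# :use_sudo and :admin_runner via try_sudo instead of plain run
namespace :deploy do
  task :symlink, :except => { :no_release => true } do
    try_sudo "rm -f #{current_path} && ln -s #{latest_release} #{current_path}"
  end
end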


Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" gcloud service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
[screenshot: error from the service enabling command]
The next step is to clone the policy library; the given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
[screenshot: error on running the directory command]
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to replace my own credentials; for example, in place of the project ID I wrote my own project ID.
Also I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches to solving it:
1. Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
2. Manually set the project ID into that variable. This is far from ideal because Qwiklabs creates a new temporary project for every lab, so this only works while you are still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
3. Avoid using the --project argument. According to the documentation, that argument is optional, and if it is omitted the command takes the default project from your configuration settings (see the sketch right after this list). You can get the current project with:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly passed with --project.
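If the configured default is not the lab project, you can point gcloud at it first and then enable the API without --project. The project ID below is just the one visible in the shared screenshots, so treat it as an example:

# Set the default project for this gcloud configuration, then enable the API
gcloud config set project qwiklabs-gcp-03-c6e1787dc09e
gcloud services enable cloudasset.googleapis.com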
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples anymore. There is a trace that it was once there, but it has since been removed, so the lab should probably be updated.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall correctly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all of this.
Adding this for later versions: in Cloud Shell you can set a temporary variable for the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
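Putting the suggestions above together, a defensive version of the lab's setup commands could look like this; it is only a sketch that falls back to the active gcloud configuration when DEVSHELL_PROJECT_ID happens to be empty:

# Fall back to the active gcloud config when DEVSHELL_PROJECT_ID is not set
export GOOGLE_PROJECT="${DEVSHELL_PROJECT_ID:-$(gcloud config get-value project)}"
export CAI_BUCKET_NAME="cai-${GOOGLE_PROJECT}"
echo "Using project: ${GOOGLE_PROJECT}"

gcloud services enable cloudasset.googleapis.com --project "${GOOGLE_PROJECT}"
gsutil mb -l us-central1 -p "${GOOGLE_PROJECT}" "gs://${CAI_BUCKET_NAME}"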

Disable a standard systemd service in Yocto build

I need to start my own systemd service, let's call it custom.service. I know how to write a recipe for it to be added and enabled on boot:
SYSTEMD_SERVICE_${PN} = "custom.service"
SYSTEMD_AUTO_ENABLE_${PN} = "enable"
However, it conflicts with one of the default systemd services - systemd-timesyncd.service.
Is there a nice preferred way to disable that default systemd service in my bitbake file even though the systemd_XX.bb actually enables it?
I can create a systemd_%.bbappend file to modify the systemd settings, but I can't locate the place where one service can be disabled leaving all others enabled.
The working solution I found is to remove the timesyncd altogether using
PACKAGECONFIG_remove = "timesyncd"
But I wonder if this is an appropriate way, and whether there is a way to just disable it but leave it in the system.
How about adding a .bbappend recipe for the conflicting service you want disabled? In it, you would add:
SYSTEMD_AUTO_ENABLE_${PN} = "disable"
If the system runs fine with the other package removed, then removing the package is a preferred solution. Fewer packages means a simpler system.
Usually you would set SYSTEMD_AUTO_ENABLE_${PN} = "disable" and that would let the service be part of the image but disabled on boot. However, for systemd itself, which provides a lot of default service units, this may not be a solution you want to deploy. You could surgically delete the symlink in /etc, which ensures that the service is not started automatically on boot while the .service file is still part of the image. So add the following to a systemd_%.bbappend file in your layer:
do_install_append() {
    rm -rf ${D}${sysconfdir}/systemd/system/sysinit.target.wants/systemd-timesyncd.service
}
There are other ways to disable this, e.g. using systemd presets as described here.
Use systemd.preset — Service enablement presets, and in particular the following steps.
Create a .bbappend file meta-xxx/recipes-core/systemd/systemd_%.bbappend with this content:
do_configure_append() {
    # Disable autostart of systemd-timesyncd
    sed -i -e "s/enable systemd-timesyncd.service/disable systemd-timesyncd.service/g" ${S}/presets/90-systemd.preset
}
In my Yocto-based Linux distribution (the zeus release) the above steps are enough to disable the service, which remains installed.
In the output distribution these steps modify the file /lib/systemd/system-preset/90-systemd.preset: after the modification the line enable systemd-timesyncd.service is replaced by disable systemd-timesyncd.service.
There is more information about the topic at this link: systemd.preset — Service enablement presets.
I was not able to use SYSTEMD_AUTO_ENABLE_${PN} = "disable" in this context.
For other recipes (for example dnsmasq_2.82.bb) that assignment works correctly, and I have used it to enable (or disable) a service in the Yocto distribution.
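On the booted image you can check whether the preset change actually took effect; this quick verification is my own suggestion, not part of the original answer:

# Should report "disabled" once the preset change is baked into the image
systemctl is-enabled systemd-timesyncd.service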
I think the "official" way to do this is to have something like this somewhere in your project:
PACKAGECONFIG_append_pn-systemd = "--disable-timesyncd"
This does basically the same you already suggested. To simply not enable the service you have to do it manually since you can modify the auto enable only per recipe.

Capistrano floating literal anymore; put 0 before dot (SyntaxError)

I have a little bit of experience using Capistrano but only on projects that are already using it and I am currently trying to set it up on a new project. I have created my config/deploy.rb file, added the appropriate configuration and am now trying to run "cap deploy:setup" to set the correct capistrano structure up on my remote server however now when I try to run this or any other Capistrano commands I get:
/Library/Ruby/Gems/2.0.0/gems/capistrano-2.15.5/lib/capistrano/configuration/loading.rb:93:in `instance_eval': ./config/deploy.rb:8: no . floating literal anymore; put 0 before dot (SyntaxError)
role :web, xxx.x.xx.xx
It looks like an issue with the format of the IP that I have provided for my app, but no variations seem to work.
Am I missing something obvious here?
Has anyone else come across this issue?
Thanks
James
I found a resolution for this in the end through trial and error. It turns out that I needed to wrap the role :web and role :app values in quotes:
"xx.xx.xx.xx"
Then it worked correctly.
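For reference, the corrected lines in config/deploy.rb look something like this (the addresses are placeholders, not real hosts):

# config/deploy.rb -- host addresses must be quoted strings
role :web, "xx.xx.xx.xx"
role :app, "xx.xx.xx.xx"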

Webistrano - how to clear global HTML cache after deployment

I am new to webistrano so apologies if this is a trivial matter...
I am using webistrano to deploy php code to several production servers, this is all working great. My problem is that I need to clear HTML cache on my cache servers (varnish cache) after the code update. I can't figure out how to build a recipe that will be executed on the webistrano machine (and will run the relevant shell script that will clear the cache) and not on each of the deployment target machines.
Thanks for the help,
Yariv
The simplest method is to execute the varnishadm tool with the proper parameters inside deploy:restart:
set :varnish_ban_pattern, "req.url ~ ^/"
set :varnish_terminal_address_port, "127.0.0.1:6082"
set :varnish_varnishadm, "/usr/bin/varnishadm"
task :restart, :roles => :web do
  run "#{varnish_varnishadm} -T #{varnish_terminal_address_port} ban \"#{varnish_ban_pattern}\""
end
Thanks for the answer. I actually need to do some more stuff than just clearing the cache, so I will execute a bash script locally as described in:
How do I execute a Capistrano task locally?
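For completeness, a hedged sketch of how that could look in a Capistrano/Webistrano recipe, using run_locally so the command executes on the Webistrano machine rather than on the deployment targets; clear_cache.sh is a hypothetical local script:

# Runs on the machine executing the deployment, not on the target servers.
# /usr/local/bin/clear_cache.sh is a hypothetical script that bans the HTML
# cache on the Varnish servers.
namespace :deploy do
  task :clear_html_cache do
    run_locally "/usr/local/bin/clear_cache.sh"
  end
end

after "deploy:restart", "deploy:clear_html_cache"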

Unicorn restart issue with capistrano

We're deploying with cap and using a script that sends USR2 to the unicorn process to reload it. This usually works, but every once in a while it fails. When that happens, looking in the unicorn log reveals that it's looking for a Gemfile in an old release directory that no longer exists.
Exception :
/usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.0.21/lib/bundler/definition.rb:14:in `build': /var/www/railsapps/inventory/releases/20111128233407/Gemfile not found (Bundler::GemfileNotFound)
To clarify, that's not the current release but an older one that has since been removed.
When it works it does seem to work correctly (i.e. it does pick up the new code), so I don't think it's somehow stuck referring to the old release.
Any ideas?
In your unicorn.rb, add a before_exec block:
current_path = "/var/www/html/my project/current"
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{current_path}/Gemfile"
end
Read more about it here http://blog.willj.net/2011/08/02/fixing-the-gemfile-not-found-bundlergemfilenotfound-error/
You should set the BUNDLE_GEMFILE environment variable before you start the server and point it at current/Gemfile.
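In practice that usually means exporting the variable in whatever script starts Unicorn; here is a sketch with illustrative paths (taken from the error message above plus the usual current symlink), not your exact setup:

# Illustrative start command: adjust paths to your application
export BUNDLE_GEMFILE=/var/www/railsapps/inventory/current/Gemfile
cd /var/www/railsapps/inventory/current
bundle exec unicorn_rails -c config/unicorn.rb -E production -D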