Does Chef overwrite file owners when deploying? Can it be avoided?

I have a Chef cookbook for deploying our webapp. There are some folders and files that need to be created and owned by www-data:www-data. I deploy the application using Chef's deploy resource, like this in my deploy.rb recipe:
deploy "#{app_dir}" do
repository tmp_dir
user "root"
group "root"
environment app[:environment]
symlink_before_migrate({})
end
The creation of those files and folders and the setting of their permissions are then handled in the before_symlink.rb script, like this:
execute "ensure correct owner of storage folder" do
command "chown -R www-data:www-data #{release_path}/storage"
end
I've been debugging and have checked the following:
chown is executed and the user exists; I can see both in the Chef logs.
If I add a sleep right at the end of before_symlink.rb and then SSH into the machine, I can see that the storage folder is owned by www-data, as I want.
If I add a sleep right after the deploy resource in deploy.rb and then SSH into the machine, the release folder is now linked to the current folder, and every file and folder is owned by root:root, causing permission errors.
So at the end of the deploy, Chef appears to overwrite the owner of every deployed file with the user performing the deploy. Is this true? Is there any way to keep the owner that was set in before_symlink.rb?

Really, really don't use the deploy resource. What you want is probably a git resource and its user property.
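For example, a minimal sketch of checking the code out directly as the target user (the repository URL and revision here are illustrative, not from the original cookbook):
git app_dir do
  repository "https://example.com/our-webapp.git"  # illustrative URL
  revision "main"                                  # illustrative branch
  user "www-data"    # checked-out files end up owned by this user
  group "www-data"
  action :sync
end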

Related

Moving tftpboot folder

OK, so I am trying to move the /var/lib/tftpboot folder the "proper" way to a dedicated partition. To accomplish this goal I have set up a separate partition called /app and moved the tftpboot folder there.
Issue 1: Symlink
After I moved the folder, I created a symlink from the new directory to the old location with ln -s /app/tftpboot /var/lib/. After doing this I am unable to restart the service with systemctl restart tftp. However, if I instead update the path listed in the service file and the config file, the service starts fine.
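For reference, that service-file change can be kept out of the packaged unit by using a systemd drop-in; a minimal sketch, assuming the unit's ExecStart looks like the stock tftp-server one (check your actual unit with systemctl cat tftp):
# /etc/systemd/system/tftp.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/in.tftpd -s /app/tftpboot
followed by systemctl daemon-reload and systemctl restart tftp.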

Permission issue while executing an SSH task in an Azure pipeline

What I am trying to do is run a few lines of shell script on a remote machine via an Azure pipeline. I used the SSH Deployment task to accomplish this, with the script path argument pointing to the .sh file that contains the script that should be run. The SSH task was able to connect to the remote host, but the following permission error pops up.
Can someone tell me what's going wrong here? The .sh file I am using was created on the Linux box itself and had its permissions set to 777 before being moved to the repo.
There is another CopyFilesOverSSH@0 task in the same stage of the pipeline which works perfectly, without any permission issues, for the same user.
2021-12-31T12:41:42.1763039Z ##[section]Starting: SSH
2021-12-31T12:41:42.1894277Z ==============================================================================
2021-12-31T12:41:42.1894676Z Task : SSH
2021-12-31T12:41:42.1895010Z Description : Run shell commands or a script on a remote machine using SSH
2021-12-31T12:41:42.1895347Z Version : 0.189.0
2021-12-31T12:41:42.1895637Z Author : Microsoft Corporation
2021-12-31T12:41:42.1896023Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/ssh
2021-12-31T12:41:42.1896437Z ==============================================================================
2021-12-31T12:41:42.8200834Z Trying to establish an SSH connection to ***#80.xxx.xxx.xxx:22
2021-12-31T12:41:43.1333018Z Successfully connected.
2021-12-31T12:41:43.5698433Z ##[error]Failed to copy script to remote machine. Error: Error: put: Permission denied //checkFileAvailability.sh.
2021-12-31T12:41:43.6050230Z ##[section]Finishing: SSH
Firstly, if you want to copy files to the remote machine, it's recommended to use the Copy Files Over SSH task. This task allows you to connect to a remote machine using SSH and copy files matching a set of minimatch patterns from a specified source folder to a target folder on the remote machine. Supported protocols for file transfer are SFTP and SCP via SFTP.
As for the SSH Deployment task: it enables you to connect to a remote machine using SSH and run commands or a script.
According to your error message, the SSH connection succeeded, but copying the script to the remote machine failed. It appears that the service account doesn't have permission to copy the specified file to the given path on the remote machine. Please check the permission settings on your source file path. Also try using an inline script instead of the script file to check whether that works.
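A minimal sketch of the inline variant in pipeline YAML, assuming a service connection named my-ssh-endpoint (the endpoint name and commands are illustrative):
- task: SSH@0
  inputs:
    sshEndpoint: 'my-ssh-endpoint'  # illustrative service connection name
    runOptions: 'inline'
    inline: |
      whoami
      ls -ld ~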
I had the same issue when running the SSH script task under a user that was not root. For an inline script to run under a different user, that user should have:
Read/write/execute access to the root folder, as TFS puts all commands into a generated bash script file and copies it to the target machine's root folder (below is another command, which is executed on the already copied script file):
tr -d '\015' < ./sshscript_099d4e8c-44ac-482d-b1bf-84a52c7ab810 > ./sshscript_099d4e8c-44ac-482d-b1bf-84a52c7ab810._unix
A home directory, as TFS switches to it.
So to fix this issue I granted rwx permissions to everyone on the root folder:
chmod 777 /
ls -ld /
drwxrwxrwx 20 root root 4096 Feb 10 14:54 /
and made sure that the home folder for my user exists.
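For example (the username deployuser is hypothetical; use the account your SSH service connection logs in as):
sudo mkdir -p /home/deployuser
sudo chown deployuser:deployuser /home/deployuser
sudo usermod -d /home/deployuser deployuser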

Click to deploy MEAN Stack on Google Compute Engine: Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it creates, so that we can start editing it locally and pushing changes?
I tried gcloud init my-project; however, all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, set up a repo locally for it, and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
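If the port you pick isn't open yet, a rule can also be created from the command line; a sketch, with an illustrative rule name and port:
gcloud compute firewall-rules create allow-app-3000 --allow tcp:3000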
Once you SSH in, you will see the target directory, named whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this is to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info.
This copied all the files from the remote instance to a folder called flow-bck at the defined local path. I renamed this folder to match its remote name, flow, and then ran:
cd flow && npm install
This installed all the modules needed for MEAN.io. The important part here is that you have to kill your remote SSH connection before you start running the local version of the app, because the SSH tunnel is already using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the MongoDB process by running mongod beforehand. In any case, mongod must be running before you can start the app locally.
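Put together, the local startup looks roughly like this (assuming the flow directory from above; run mongod in its own terminal if you prefer):
mongod --fork --logpath /tmp/mongod.log   # start MongoDB in the background
cd flow
gulp                                      # serves the app on port 3000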
The two things I haven't done yet are editing and deploying a new version based on this... and answering the nagging question of whether this is all even necessary. It would be great to find that this can all be done with a few simple commands.

cap deploy:setup creates the release folder with root as owner

I am using Capistrano to deploy my Rails application on an Ubuntu server.
I have already logged into the server and created a folder /webapps/myapp, but no subfolders beneath it.
Then I run
cap deploy:setup
No errors so far, so I run
cap deploy:setup
Now I get this message:
You do not have permissions to write to /webapps/myapp/releases
I can get around this by logging in to the server and changing the owner of releases; I just wonder why it is not created with the user I use for deploying. Is this how it works, or am I missing something?
In your deploy.rb file you should specify the deployment user and whether they have sudo privileges.
set :user, "william"
set :use_sudo, false
Giving sudo privileges isn't recommended, but the option exists.
The directory to which you deploy should already be owned by the deployment user "william".
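For example, something like this on the server, run once as root (paths and user taken from the question):
chown -R william:william /webapps/myapp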

capistrano deployment with use_sudo=true - permissions problem

I am trying to deploy with Capistrano to a newly installed Ubuntu server.
I am deploying to the directory /var/www, owned by root, so I need to set use_sudo to true.
While I can execute commands with run "#{try_sudo} command" without problems, svn checkout doesn't work with the sudo prefix.
I try
set :deploy_via, :export
and it throws
Can't make directory '/var/www/pr_name/releases/20091217171253': Permission denied
during checkout
I imagine adding the "try_sudo" prefix to "svn export" would help, but where can I edit the command it uses for deploy_via?
--
If, on the other hand, I don't use use_sudo and instead set /var/www/ ownership to myuser, I still cannot deploy: some of my deployment commands set folder ownership to the Apache user www-data, and then I get something like:
changing ownership of `/var/www/pr_name/current/specificdirectory': Operation not permitted
which, if I understand correctly, has to be done with sudo.
Using the sudo helper solved the problem.
Here is an example:
run "#{sudo} chown root:root /etc/my.cnf"
Try cap deploy:setup