When I cap deploy my Symfony2 project and then log into my server, I see that the dev front controller (app_dev.php) runs fine but the prod version (app.php) does not.
The error is:
[Tue Jan 03 14:31:48 2012] [error] [client xxx.xxx.xxx.xxx] PHP Fatal error: Uncaught exception 'RuntimeException' with message 'Failed to write cache file "/var/www/example/prod/releases/20120103202539/app/cache/prod/classes.php".' in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache:1079
Stack trace:
#0 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(1017): Symfony\Component\ClassLoader\ClassCollectionLoader::writeCacheFile('/var/www/example/p...', '<?php ????name...')
#1 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(682): Symfony\Component\ClassLoader\ClassCollectionLoader::load(Array, '/var/www/example/p...', 'classes', false, false, '.php')
#2 /var/www/example/prod/releases/20120103202539/web/app.php(10): Symfony\Component\HttpKernel\Kernel->loadClassCache()
#3 {main}
thrown in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache on line 1079
Looking at the recently deployed cache directory I see:
drwxrwxrwx 4 root root 4096 Jan 3 14:28 .
drwxrwxr-x 5 root root 4096 Jan 3 14:28 ..
drwxr-xr-x 6 www-data www-data 4096 Jan 3 14:28 dev
drwxrwxr-x 7 root root 4096 Jan 3 14:28 prod
I can fix the issue with chown -R www-data:www-data prod/, but I wondered if I can stop this from happening in the first place. And why do the directories have different owners?
This happens because your web server runs as a user who cannot write to the freshly created cache/prod directory. (The owners differ because cache/dev was created by the web server itself the first time app_dev.php was hit, so it belongs to www-data, while cache/prod was created during deployment by the user Capistrano connected as, root here.)
There are two solutions that I know and use. First, add extra commands to the Capfile to run after deployment. The Capfile will look like this:
load 'deploy' if respond_to?(:namespace) # cap2 differentiator
Dir['vendor/bundles/*/*/recipes/*.rb'].each { |bundle| load(bundle) }
load Gem.find_files('symfony2.rb').last.to_s

after "deploy:finalize_update" do
  run "sudo chown -R www-data:www-data #{latest_release}/#{cache_path}"
  run "sudo chown -R www-data:www-data #{latest_release}/#{log_path}"
  run "sudo chmod -R 777 #{latest_release}/#{cache_path}"
end

load 'app/config/deploy'
The second solution is more elegant: in deploy.rb, specify the correct user, one who can write to the cache, and make sure that you don't use sudo:
set :user, "anton"
set :use_sudo, false
In the latest version of capifony, they've added an option to set writable directories.
Here's the official article which explains what I've written below: http://capifony.org/cookbook/set-permissions.html
You have to deploy using sudo (not a good practice, but it gets the job done)
set :use_sudo, false
# To prompt for the sudo password
default_run_options[:pty] = true
and tell capifony to make the cache and logs folders writable:
set :writable_dirs, ["app/cache", "app/logs"]
set :webserver_user, "www-data"
set :permission_method, :acl
(you have to install acl on your machine, or use :chown instead of :acl)
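For reference, the :acl method boils down to setfacl calls along these lines. This is a hedged sketch based on the standard Symfony permissions recipe, not necessarily the exact commands capifony runs; www-data and the directories are taken from this thread:

setfacl -R  -m u:www-data:rwX -m u:`whoami`:rwX app/cache app/logs
setfacl -dR -m u:www-data:rwX -m u:`whoami`:rwX app/cache app/logs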
EDIT:
I've just realized that this is not enough: the set_permissions task is not called automatically, so you have to run it explicitly:
cap deploy:set_permissions
Or add this line to your deploy.rb:
before "deploy:restart", "deploy:set_permissions"
I solved this problem by adding the cache folder to the shared folders.
set :shared_children, [app_path + "/cache", app_path + "/logs", web_path + "/uploads", "vendor"]
This way the directory is not recreated each time during deployment, so there is no problem with permissions.
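For context on why this works: Capistrano creates the shared directories once and symlinks them into each new release, so ownership set once is kept across deploys. Roughly like this (illustrative, using the deploy layout from the question; the exact shared-path layout depends on your Capistrano/capifony version):

# created once; keeps its ownership and permissions across deploys
/var/www/example/prod/shared/cache

# each deploy just re-points a symlink instead of creating a new cache dir
ln -nfs /var/www/example/prod/shared/cache /var/www/example/prod/releases/20120103202539/app/cache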
Yes, there's no need to recreate the cache after every deploy; this solution is logical and pragmatic.
The second solution from Anton works if the cache folder permissions are already correct in your development environment.
Related
We had some 12 agents (vsts-agent-linux-x64-2.188.4) running on one Azure VM (Ubuntu 20.04.2 LTS) as processes (./config.sh && screen ./run.sh). All was well.
I had to run a command related to the /tmp folder, but it kept showing as busy, and we suspected our agents might be using /tmp. Unfortunately, instead of stopping the agents in some clean way, we manually killed all processes on this VM, including the agents'.
After the /tmp-related command ran successfully, I tried running screen ./run.sh from one of the agent directories. And I got an error:
Failed to create CoreCLR, HRESULT: 0x80004005
I also tried .agent2/run.sh, and I got the error:
ldd: ./bin/libcoreclr.so: No such file or directory
ldd: ./bin/System.Security.Cryptography.Native.OpenSsl.so: No such file or directory
ldd: ./bin/System.IO.Compression.Native.so: No such file or directory
ldd: ./bin/System.Net.Http.Native.so: No such file or directory
Failed to create CoreCLR, HRESULT: 0x80004005
I even downloaded a new .tar for the agent and ran a fresh ./config, but I get the same error on ./config as well.
Is there a solution to this? Please help.
export COMPlus_EnableDiagnostics=0 (note: no spaces around the =), and then running ./config from the agent directory, worked!
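A minimal sketch of that fix, assuming the agent lives in ~/agent (the .env file in the agent directory is the agent's mechanism for persisting environment variables, in case you want the setting to survive restarts):

cd ~/agent
export COMPlus_EnableDiagnostics=0   # no spaces around '='
./config.sh                          # then ./run.sh as usual

# optionally persist it for future runs of this agent:
echo 'COMPlus_EnableDiagnostics=0' >> .env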
I had this issue when running as the non-privileged user specified in the systemd file, but running as the root user worked fine.
Finally used:
strace -f -o trace.log /<executable path>/<executable name>
Which led me to:
9183 mknod("/tmp/clr-debug-pipe-9183-8112345738-in", S_IFIFO|0700) = -1 EACCES (Permission denied)
This caused me to compare the /tmp directory between working and non-working boxes.
[<not-working-hostname>]$ ll /
drwxrwxr-x 7 root root 93 Jan 5 21:37 tmp
[<working-hostname>]$ ll /
drwxrwxrwt 7 root root 93 Jan 5 21:59 tmp
(Note the r-x vs rwt)
Fix:
[<hostname>]# chmod 1777 /tmp
For anyone else reading this: it seems the issue was caused by permissions, and suexec was part of it. Having disabled suexec, all is well again (subject to consequential issues I may find later).
I have two files in (say) dir1, /cgi-bin/dashboard-login/, and they use CGI::Session to manage the session.
Both files set a new session like this:
my $session = new CGI::Session(undef, $cgi, {Directory=>"$sessions_dir_location"}) or die CGI::Session->errstr;
This means the second file is actually opening the session created by file1. All good so far.
File 3 is in the same subdomain but in a different dir (/cgi-bin/dashboard/). It also runs that same session code, but I get the following error:
Software error:
new(): failed: load(): couldn't retrieve data: retrieve(): couldn't open '/var/www/vhosts/example.com/sessions_storage/cgisess_fc6c62eee135f6cd418defef4516a59c': Permission denied at index line 38.
For help, please send mail to the webmaster (root@localhost), giving this error message and the time and date of the error.
In FileZilla, I see that the permissions on the latest session file are set to "dfr (0640)", but the previous one has "adfr (0640)". That adfr file can be opened in FileZilla and caused no issues when I ran my scripts. Now the session files are being created as "dfr (0640)". Is there a way to set the server (or CGI::Session) to apply "adfr (0640)" permissions?
And, in your experience, is that the likely cause of the problem?
Here you go, Håkon Hægland:
ls -l /var/www/vhosts/myDomain.com/sessions_storage
-rw-r-----. 1 MyUserName psacln 166 Jan 26 01:22 cgisess_0741489d1010b7ab36f86420e5c58e84
-rw-r-----. 1 apache apache 1769 Jan 26 12:35 cgisess_2d475576f960f6c5407d7a273c02ead1
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard-login
-rwxr-xr-x. 1 MyUserName psacln 30628 Jan 26 01:46 login.pl
-rwxr-xr-x. 1 MyUserName psacln 48391 Jan 26 00:49 login-with-pin.pl
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard
-rwxr-xr-x. 1 MyUserName psacln 40742 Jan 24 17:47 web_content_manager
For anyone else reading this, it was a permissions issue. It seems to relate to suexec. Having disabled suexec temporarily, until I learn more about directory locations and permissions, all is well again.
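Looking at the listings above, the session files are created 0640 with different owners (MyUserName vs apache), so whichever user didn't create a given session file can't read it. One hedged workaround sketch, assuming the scripts can share a common group on the sessions directory: loosen the umask so CGI::Session creates group-writable (0660) files. The umask value is illustrative, and the arrow-style constructor is equivalent to the indirect new in the question:

use CGI;
use CGI::Session;

my $cgi = CGI->new;
my $sessions_dir_location = "/var/www/vhosts/example.com/sessions_storage";  # path from the error above

umask 0007;  # illustrative: session files are then created 0660 (rw for owner and group) instead of 0640
my $session = CGI::Session->new(undef, $cgi, { Directory => $sessions_dir_location })
    or die CGI::Session->errstr;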
I know there are multiple versions of this question on SO; I've tried the solutions posted on those threads and they don't seem to help :(
I have VS Code installed in an Ubuntu VM. I can't seem to get the Python linter to work, i.e. I get a message saying
Linter pylint is not installed
I am pretty sure pylint is installed on the VM because when I run which pylint I get valid output.
Here are the outputs for which python and which pylint respectively
/usr/bin/python
/home/rakshak/.local/bin/pylint
And I have the following in my User settings and workspace settings in VS Code
// Place your settings in this file to overwrite the default settings
{
"python.linting.pylintEnabled": true,
"python.linting.pylintPath": "/home/rakshak/.local/bin/pylint",
"python.pythonPath": "/usr/bin/python"
}
So, turns out this was just a permissions issue!
When I got the pylint not installed message, I was presented with a button to "Install pylint", which runs
sudo pip install pylint
This changed the owner of my .local/lib/ to root and made it inaccessible to VS Code.
Output of ls -ld ~/.local/lib/ was
drwx------ 3 root root 4096 Sep 24 10:49 /home/userName/.local/lib/
Running chown with my user and group fixed the issue (note that chown takes user:group, in that order):
sudo chown -R userName:userGroup ~/.local
Now the output of ls -ld ~/.local/lib/ reads
drwx------ 3 userName userGroup 4096 Sep 24 10:49 /home/rakshak/.local/lib/
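To avoid the root-owned ~/.local in the first place, a per-user install needs no sudo (a sketch; run it as your normal user):

pip install --user pylint   # installs under ~/.local, which stays owned by you
ls -ld ~/.local/lib         # should now show your own user and group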
Have you checked which Python version you installed pylint with?
If you used Python 3.6, then the setting has to be like this:
"python.pythonPath": "/usr/bin/python3.6"
I've got a Chrome OS app crashing regularly and causing all other Chrome processes to crash as well.
I'm seeing crash reports in chrome://crashes, but no way to see the details of a report. I also can't find any minidump files to analyse.
What is the way to get at crash report internals in Chrome OS?
Try the solution in this SO post.
root@localhost:~$ mkdir /tmp/misc && chmod 777 /tmp/misc
root@localhost:~$ cd /tmp
root@localhost:~$ watch -n 1 'find . -mmin -1 -exec cp {} /tmp/misc/ \;'
Then, as a regular user (not root):
google-chrome --enable-logging --v=1
Once you see files created by the watch command, run:
root@localhost:~$ ls -l
-rw------- 1 root root 230432 Apr 16 09:06 chromium-renderer-minidump-2113a256de381bce.dmp
-rw------- 1 root root 230264 Apr 16 09:12 chromium-renderer-minidump-95889ebac3d8ac81.dmp
-rw------- 1 root root 231264 Apr 16 09:13 chromium-renderer-minidump-da0752adcba4e7ca.dmp
-rw------- 1 root root 236246 Apr 16 09:12 chromium-upload-56dc27ccc3570a10
-rw------- 1 root root 237247 Apr 16 09:13 chromium-upload-5cebb028232dd944
Now you can use breakpad to work on the *.dmp files.
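For example, Breakpad's minidump_stackwalk can symbolize a dump (a sketch: the tool is part of Breakpad, but the symbol directory here is an assumption about your setup):

minidump_stackwalk chromium-renderer-minidump-2113a256de381bce.dmp /path/to/symbols > stack.txt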
You need to be in dev mode in order to access the crash reports. There's no way otherwise to access where the crashes are saved (system crashes under /var/spool/crash and browser/user crashes under /home/chronos/*/crash/).
However, if you're using official Chrome OS builds, we don't currently publish the symbols for the binaries, so it'll probably be a bit difficult to debug using those minidumps.
I use PuTTY to log in to a Solaris server. While performing a copy operation, I pressed the left-arrow key to edit the file name, but it kept inserting the characters ^[[D. Desperate, I pressed the return key and the copy operation completed:
cp temp.jar temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D
I was planning to rename it as temp.jar.test. I used the ls command to check what had happened, and to my surprise two files came up with the same name!
root[dev1]# ls -lt temp*
-rw-r--r-- 1 root other 488554 Apr 11 02:25 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar.041114
-rw-r--r-- 1 root other 488487 Sep 30 2013 temp.jar.032514
I used the rm command to delete it; the original file got deleted, but the file copied with the ^[[D characters is not getting deleted, and I'm getting a message like 'eisvr.jar.: No such file or directory'.
Help me delete the file. I tried issuing rm temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D; it resulted in more errors.
The simplest way would be to run this command:
rm -i temp.jar?????????*
and answer yes when prompted to remove the bogus one.
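For context, each ^[[D is really a three-character escape sequence (ESC, [, D), so the bogus name has 27 extra characters after temp.jar; the nine ? wildcards plus * match them without your having to type the escapes. If you'd rather not rely on globbing, removing the file by inode also works (the inode number below is a placeholder):

ls -lib temp.jar*                      # -b prints non-printing characters visibly, -i shows inode numbers
find . -inum <inode-number> -exec rm {} \;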