How to set correct file permissions with CGI::Session - perl

For anyone else reading this: the issue was caused by permissions, and suexec was part of it. Having disabled suexec, all is well again (subject to consequential issues I may find later).
I have two files in (say) dir1, at /cgi-bin/dashboard-login/, and they use CGI::Session to manage the session.
Both files set a new session like this:
my $session = new CGI::Session(undef, $cgi, {Directory=>"$sessions_dir_location"}) or die CGI::Session->errstr;
This means the second file is actually opening the session created by file1. All good so far.
File 3 is in the same sub-domain but in a different dir (/cgi-bin/dashboard/). It also runs that same session line, but I get the following error:
Software error:
new(): failed: load(): couldn't retrieve data: retrieve(): couldn't open '/var/www/vhosts/example.com/sessions_storage/cgisess_fc6c62eee135f6cd418defef4516a59c': Permission denied at index line 38.
For help, please send mail to the webmaster (root@localhost), giving this error message and the time and date of the error.
In FileZilla, I see that the permissions on the latest session file are shown as "dfr (0640)", but the previous one shows "adfr (0640)". That adfr file can be opened in FileZilla and didn't cause any issues when I ran my scripts. Now the session files are being created as "dfr (0640)". Is there a way to set the server (or CGI::Session) to apply the "adfr (0640)" permissions?
And, in your experience, is that the likely cause of the problem?

Here you go, Håkon Hægland:
ls -l /var/www/vhosts/myDomain.com/sessions_storage
-rw-r-----. 1 MyUserName psacln 166 Jan 26 01:22 cgisess_0741489d1010b7ab36f86420e5c58e84
-rw-r-----. 1 apache apache 1769 Jan 26 12:35 cgisess_2d475576f960f6c5407d7a273c02ead1
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard-login
-rwxr-xr-x. 1 MyUserName psacln 30628 Jan 26 01:46 login.pl
-rwxr-xr-x. 1 MyUserName psacln 48391 Jan 26 00:49 login-with-pin.pl
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard
-rwxr-xr-x. 1 MyUserName psacln 40742 Jan 24 17:47 web_content_manager
For anyone else reading this: it was a permissions issue, and it seems to relate to suexec. Having disabled suexec temporarily, until I understand directory locations and permissions fully, all is well again.
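A less drastic alternative than disabling suexec, assuming the apache user and the suexec user share a group (psacln in the listings above), would be to relax the umask in each script before the session is created, so new session files come out group-writable (0660) instead of 0640. A minimal sketch, not the poster's actual fix:

use CGI;
use CGI::Session;

# Assumption: the apache user and the suexec user are members of one
# common group, so group rw on the session file is sufficient.
umask 0007;  # files created from here on get rw for owner and group

my $cgi = CGI->new;
my $sessions_dir_location = '/var/www/vhosts/example.com/sessions_storage';
my $session = CGI::Session->new(undef, $cgi,
    { Directory => $sessions_dir_location })
    or die CGI::Session->errstr;

The mode of a new session file is the driver's requested mode (typically 0660 for the file driver) filtered through the process umask, which is why the files above come out as 0640; every script that can create a session would need the same umask call.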

Related

Maintain executable file permissions when uploading binaries to a GitHub release

I'm adding binaries to a release in GitHub by dragging and dropping them into the binaries upload section when creating a new release. The binaries have the following permissions on my local machine (OS X):
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file1
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file2
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file3
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file4
However, when I download the binary from Releases, the file mode has changed:
-rw-r--r--@ 1 user group 100 Mar 22 09:00 file1
Has this been documented anywhere? Is there a way to preserve file permissions when uploading binaries to GitHub?
Is there a way to preserve file permissions when uploading binaries to GitHub?
I don't believe so. People who download the file will need to chmod +x to get the execute permission back. A file's permissions are not stored within the file itself; rather, they are an attribute of the file on the file system.
If you really need to preserve complex permissions for files, I would suggest storing them in a container that preserves permissions, like a DMG for macOS, and uploading the container instead.
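One such container that is easy to serve from a release is a plain tarball, since tar records each member's mode bits. A minimal sketch using Perl's core Archive::Tar (the file names are the ones from the question):

use Archive::Tar;

# Pack the binaries; tar stores each file's mode, so the execute bit
# survives the round trip through a GitHub release download.
my $tar = Archive::Tar->new;
$tar->add_files(qw(file1 file2 file3 file4));
$tar->write('binaries.tar.gz', COMPRESS_GZIP);

Downloaders then get the original permissions back with a single tar xzf binaries.tar.gz.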

In Solaris server with PuTTY, unable to delete copied file with an irregular special character

I use PuTTY to log in to a Solaris server. While I was performing a copy operation, I pressed the left-arrow key to edit the file name, but it kept adding the character sequence ^[[D. Desperate, I pressed the return key and the copy operation completed:
cp temp.jar temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D
I was planning to rename it to temp.jar.test. I used the 'ls' command to check what had happened, and to my surprise two files came up with the same name!
root[dev1]# ls -lt temp*
-rw-r--r-- 1 root other 488554 Apr 11 02:25 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar.041114
-rw-r--r-- 1 root other 488487 Sep 30 2013 temp.jar.032514
I used the 'rm' command to delete them; the original file got deleted, but the file copied with the ^[[D characters is not getting deleted, and I'm getting a message like 'eisvr.jar.: No such file or directory'.
Help me delete the file. I tried issuing 'rm temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D', but it resulted in more errors.
The simplest way would be to run this command:
rm -i temp.jar?????????*
and answer yes when prompted to remove the bogus one.
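If the shell keeps mangling the escape sequence, another option is to match the name from a small Perl one-off instead of typing the control characters. A sketch, assuming the stray name really is temp.jar followed by literal ESC [ D sequences (displayed as ^[[D):

#!/usr/bin/perl
opendir my $dh, '.' or die "opendir: $!";
for my $name (readdir $dh) {
    # \e is the ESC byte; each left-arrow press appended "ESC [ D"
    next unless $name =~ /^temp\.jar(?:\e\[D)+$/;
    unlink $name or warn "could not remove '$name': $!";
}
closedir $dh;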

SELinux - File Contexts Look Good, But SELinux Won't Allow Write

I am trying to learn SELinux. As a sandbox, I have a vsftpd server running on CentOS to experiment with. I want anonymous users to be able to place files in /var/ftp/incoming. From a remote machine the user can log in successfully but cannot place a file on the remote vsftpd server:
$ ftp mysql_server
Connected to mysql_server (192.168.1.31).
220 Welcome to blah FTP service.
Name (mysql_server:root): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put atd
local: atd remote: atd
227 Entering Passive Mode (192,168,1,31,19,161).
553 Could not create file.
ftp>
On the VSFTPD server, aureport -a report shows:
[root@mysql_server ftp]# aureport -a
AVC Report
========================================================
# date time comm subj syscall class permission obj event
========================================================
4. 04/08/2013 13:30:36 vsftpd unconfined_u:system_r:ftpd_t:s0-s0:c0.c1023 21 dir write system_u:object_r:public_content_t:s0 denied 28
5. 04/08/2013 13:34:57 vsftpd unconfined_u:system_r:ftpd_t:s0-s0:c0.c1023 2 dir write system_u:object_r:public_content_t:s0 denied 47
I checked the directory and the file contexts look good, so I don't understand why SELinux won't allow vsftpd to write to the incoming directory:
[root@mysql_server ftp]# ls -Z
drwx-wx---. root ftp system_u:object_r:public_content_t:s0 incoming
drwxr-xr-x. root root system_u:object_r:public_content_t:s0 pub
[root@mysql_server ftp]#
You need to run the following commands to allow uploading and editing files under SELinux:
setsebool -P allow_ftpd_full_access on
setsebool -P ftp_home_dir on
Your SELinux type is not correct. Use 'public_content_rw_t' instead of 'public_content_t' (for example, chcon -R -t public_content_rw_t /var/ftp/incoming). Read more at http://beginlinux.com/blog/2008/11/vsftpd-and-selinux-on-centos/

Perl: strange behaviour of glob with files greater than 2 GB

I'm simply trying to get a list of filenames given a path with wildcard.
my $path = "/foo/bar/*/*.txt";
my @file_list = glob($path);
foreach my $current_file (@file_list) {
print "\n- $current_file";
}
Mostly this works perfectly, but if there's a file greater than 2GB, somewhere in one of the /foo/bar/* subpaths, the glob returns an empty array without any error or warning.
If I remove the file, or add a character or bracket sequence like this:
my $path = "/foo/bar/*[0-9]/*.txt";
or
my $path = "/foo/bar/*1/*.txt";
then the glob works again.
UPDATE:
Here's an example (as a matter of business policy, I had to mask the pathnames):
[root]/foo/bar # ls -lrt
drwxr-xr-x 2 root system 256 Oct 11 2006 lost+found
drwxr-xr-x 2 root system 256 Dec 27 2007 abc***
drwxr-xr-x 2 root system 256 Nov 12 15:32 cde***
-rw-r--r-- 1 root system 2734193149 Nov 15 05:07 archive1.tar.gz
-rw-r--r-- 1 root system 6913743 Nov 16 05:05 archive2.tar.gz
drwxr-xr-x 2 root system 256 Nov 16 10:00 fgh***
[root]/foo/bar # /home/user/test.pl
[root]/foo/bar #
Removing the >2GB file (or globbing with "/foo/bar/[acf]*/*.txt" instead of "/foo/bar/*/*.txt") makes it work again:
[root]/foo/bar # ls -lrt
drwxr-xr-x 2 root system 256 Oct 11 2006 lost+found
drwxr-xr-x 2 root system 256 Dec 27 2007 abc***
drwxr-xr-x 2 root system 256 Nov 12 15:32 cde***
-rw-r--r-- 1 root system 6913743 Nov 16 05:05 archive2.tar.gz
drwxr-xr-x 2 root system 256 Nov 16 10:00 fgh***
[root]/foo/bar # /home/user/test.pl
- /foo/bar/abc***/heapdump.phd.gz
- /foo/bar/cde***/javacore.txt.gz
- /foo/bar/fgh***/stuff.txt
[root]/foo/bar #
Any suggestions?
I'm working with:
Perl 5.8.8
AIX 5.3
The filesystem is a local jfs.
In the absence of a proper answer, you're going to want a workaround. I'm guessing you've hit some platform-specific bug in the glob() implementation of 5.8.8.
I had a quick look at the source on CPAN but my C is too rusty to spot anything useful.
There have been lots of changes to that module though, so a bug may well have been reported and fixed. You're not even on the latest release of 5.8: there's a 5.8.9 out there which mentions updates to AIX compatibility and File::Glob.
I'd test this by installing local::lib if you haven't already, and then perhaps cpanm, and try updating File::Glob to see what that does. You might need to download the files by hand from e.g. here.
If that solves the problem then you can either deploy updates to the required systems, or you'll have to re-implement the bits of glob() you want. Which is going to depend on how complex your patterns get.
If it doesn't solve the problem, then at least you'll be able to stick some printfs into the code and see what it's doing.
Hopefully someone will post a real answer and make this redundant about 5 minutes after I click "Post Your Answer" though.
I've never used the newer glob function before, so I can't comment on benefits/problems, but it seems quite a lot of people have had issues using it: see https://stackoverflow.com/search?q=perl+glob for some questions and possible solutions.
If you don't mind trying out something else, here is my tried-and-tested 'old school' Perl solution that I have used in countless projects:
my $path = "/foo/bar/";
my @result_array = qx(find $path -iname '*.txt'); # run the system find command
chomp @result_array; # strip the trailing newline qx() leaves on each path
If you, for whatever reason, prefer not to run a system command from within your script, then look up the built-in File::Find module instead: http://search.cpan.org/~dom/perl-5.12.5/lib/File/Find.pm
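For completeness, a File::Find sketch that collects the same *.txt files without shelling out; note that, unlike the original /foo/bar/*/*.txt pattern, it recurses the whole tree, so add a depth check if that matters:

use File::Find;

my @file_list;
# File::Find walks the tree with its own readdir/stat calls, so it
# avoids whatever glob() is tripping over when the >2GB file is present.
find(sub {
    push @file_list, $File::Find::name if -f && /\.txt\z/;
}, '/foo/bar');

print "\n- $_" for @file_list;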
Good luck!

Capifony and directory owners

When I cap deploy my Symfony2 project and then log into my server, I see that the dev front controller (app_dev.php) runs OK but the prod version (app.php) does not.
The error is
[Tue Jan 03 14:31:48 2012] [error] [client xxx.xxx.xxx.xxx] PHP Fatal error: Uncaught exception 'RuntimeException' with message 'Failed to write cache file "/var/www/example/prod/releases/20120103202539/app/cache/prod/classes.php".' in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache:1079\nStack trace:\n#0 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(1017): Symfony\\Component\\ClassLoader\\ClassCollectionLoader::writeCacheFile('/var/www/example/p...', '<?php ????name...')\n#1 /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache(682): Symfony\\Component\\ClassLoader\\ClassCollectionLoader::load(Array, '/var/www/example/p...', 'classes', false, false, '.php')\n#2 /var/www/example/prod/releases/20120103202539/web/app.php(10): Symfony\\Component\\HttpKernel\\Kernel->loadClassCache()\n#3 {main}\n thrown in /var/www/example/prod/releases/20120103202539/app/bootstrap.php.cache on line 1079
Looking at the recently deployed cache directory I see:
drwxrwxrwx 4 root root 4096 Jan 3 14:28 .
drwxrwxr-x 5 root root 4096 Jan 3 14:28 ..
drwxr-xr-x 6 www-data www-data 4096 Jan 3 14:28 dev
drwxrwxr-x 7 root root 4096 Jan 3 14:28 prod
I can fix the issue with chown -R www-data:www-data prod/, but I wondered if I can stop this from happening in the first place? And why do the directories have different owners?
This happens because your web server runs as a user that is not able to write to the freshly created cache/prod directory.
There are two solutions that I know and use. First, add extra commands to run after deployment to your Capfile. The Capfile will look like this:
load 'deploy' if respond_to?(:namespace) # cap2 differentiator
Dir['vendor/bundles/*/*/recipes/*.rb'].each { |bundle| load(bundle) }
load Gem.find_files('symfony2.rb').last.to_s
after "deploy:finalize_update" do
run "sudo chown -R www-data:www-data #{latest_release}/#{cache_path}"
run "sudo chown -R www-data:www-data #{latest_release}/#{log_path}"
run "sudo chmod -R 777 #{latest_release}/#{cache_path}"
end
load 'app/config/deploy'
The second solution is more elegant: in deploy.rb, specify the correct user, who can write to the cache, and make sure that you don't use sudo:
set :user, "anton"
set :use_sudo, false
In the latest version of capifony, they've added an option to set writable directories.
Here's the official article which explains what I've written below: http://capifony.org/cookbook/set-permissions.html
You have to deploy using sudo (not a good practice, but it gets the job done):
set :use_sudo, false
# To prompt the sudo password
default_run_options[:pty] = true
and tell capifony which directories (the cache and logs folders) to make writable:
set :writable_dirs, ["app/cache", "app/logs"]
set :webserver_user, "www-data"
set :permission_method, :acl
(you have to install acl on your machine, or use :chown instead of :acl)
EDIT:
I've just realized that this is not enough. The "set_permissions" task is not called automatically, so you have to explicitly run
cap deploy:set_permissions
Or add this line in your deploy.rb :
before "deploy:restart", "deploy:set_permissions"
I solved this problem by adding the cache folder to the shared folders.
set :shared_children, [app_path + "/cache", app_path + "/logs", web_path + "/uploads", "vendor"]
This way the directory is not recreated each time during deployment, so there is no problem with permissions.
Yes, there's no need to recreate the cache after every deploy; this solution is logical and pragmatic.
The second solution from Anton works if your cache folder permissions are correct in the development environment.