I just encountered some strange behavior with Perl 5.16.3 on FreeBSD 9.3-RELEASE-p3. We've got a cron job which runs every five minutes and generates some text status files. I just happened to list the contents of the output directory and saw that the timestamps for some of the files were in the future! The files are created like this:
if (open(OUT, "> $status_file_path")) {
    print OUT "$status_info\n";
    close OUT;
}
Now, the file handle OUT is used in several places; however, it is always opened and closed within the same block, as shown above. And as I said, out of ten files only a few had future dates when displayed with ls.
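Incidentally, a lexical filehandle would rule out any interference between blocks sharing the global OUT handle; a minimal equivalent sketch:
if (open(my $out, '>', $status_file_path)) {
    print {$out} "$status_info\n";
    close $out;
}
Though, as it turned out, the handle wasn't the problem.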
For example, files with the current date had timestamps like 04/02/2015 20:29:46; files with future timestamps were out in November, e.g. 11/10/2015 09:38:41.
What might be going on here?
EDIT
I've got two tests running:
1) a Perl script running a loop of 1000 iterations, sleeping a random time of up to 10 seconds between iterations, using the open/print/close logic above to create an output file, and aborting the script if the file's modification time is in the future (a sketch follows this list).
2) a cron entry to touch a test file every minute, e.g. touch /home/test/test_file_date_with_cron.txt
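Test 1 looks roughly like this (paths are illustrative):
#!/usr/bin/perl
use strict;
use warnings;

my $status_file_path = '/home/test/test_file_date.txt';
for my $i (1 .. 1000) {
    if (open(OUT, "> $status_file_path")) {
        print OUT "iteration $i\n";
        close OUT;
    }
    my $now   = time();
    my $mtime = (stat($status_file_path))[9];   # modification time
    die "mtime in the future: $mtime > $now\n" if $mtime > $now;
    sleep int(rand(10));                        # random pause of up to 10 seconds
}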
TEST RESULTS
Neither of the tests generated output files with a timestamp in the future.
This is scary.
EDIT 2
Here is the filesystem info; the files are written under /usr.
# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/gprootfs 2G 133M 1.7G 7% /
devfs 1.0k 1.0k 0B 100% /dev
/dev/gpt/gpusrfs 431G 3.8G 392G 1% /usr
procfs 4.0k 4.0k 0B 100% /proc
EDIT 3
Running the script outside of cron for several hundred iterations didn't duplicate the problem. HOWEVER, I just found some other files, created by a CGI script, which have the future dates:
-rw-r--r-- 1 test test 5783 Nov 10 2015 Config.xml_20150210_104151
-rw-r--r-- 1 test test 34548 Nov 10 2015 Config2.xml_20150210_104151
-rw-r--r-- 1 test test 6105 Nov 10 2015 Config.xml_20151109_232210
-rw-r--r-- 1 test test 34554 Nov 10 2015 Config2.xml_20151109_232210
-rw-rw-r-- 1 root test 2075 Nov 9 2015 Config.xml_20151109_231055
-rw-rw-r-- 1 root test 1232 Nov 9 2015 Config2.xml_20151109_231055
These are archive files, which get moved and renamed with the file's mtime as a timestamp suffix. Note that BOTH ls and Perl's stat() function report the future date -- stat() is used to generate the timestamp portion of the name.
Looking at the first entry, ls reports "Nov 10 2015", whereas when the CGI script processed it, Perl's stat() reported "20150210_104151", i.e. "Feb 10 2015", which is most likely correct.
Further down, we see ls showing "Nov 10 2015" and stat() reported "20151109_232210", i.e. "Nov 09 2015".
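The renaming amounts to something like this (a simplified sketch; $file stands in for the real path):
use POSIX qw(strftime);

my $mtime = (stat($file))[9];                              # modification time via stat()
my $stamp = strftime('%Y%m%d_%H%M%S', localtime($mtime));  # e.g. 20151109_232210
rename $file, "${file}_$stamp" or warn "rename failed: $!";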
Finding those additional archived config files helped me track down the cause, which was, as others have suggested, that the system date and timezone had changed.
From: 1447147328 and America/Adak
To: 1426637771 and America/New_York
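(A quick way to sanity-check those epoch values:
perl -le 'print scalar gmtime 1447147328'
prints Tue Nov 10 09:22:08 2015 -- matching the "future" dates on the files above.)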
What was throwing me off was that I thought the cron script wrote ALL of the output files each time it executed, but that's not the case. The files have different "refresh intervals".
Related
For anyone else reading this: it seems the problem was caused by permissions, and suexec was part of it. Having disabled suexec, all is well again (subject to consequential issues I may find later).
I have two files in (say) dir1, in /cgi-bin/dashboard-login/, and they use CGI::Session to manage the session.
Both files set a new session like this:
my $session = CGI::Session->new(undef, $cgi, { Directory => $sessions_dir_location }) or die CGI::Session->errstr;
This means the second file is actually opening the session created by file1. All good so far.
File 3 is in the same sub-domain but in a different dir (/cgi-bin/dashboard/). It runs the same session code, but I get the following error:
Software error:
new(): failed: load(): couldn't retrieve data: retrieve(): couldn't open '/var/www/vhosts/example.com/sessions_storage/cgisess_fc6c62eee135f6cd418defef4516a59c': Permission denied at index line 38.
For help, please send mail to the webmaster (root@localhost), giving this error message and the time and date of the error.
In Filezilla, I see that the latest session file's permissions are "dfr (0640)", but the previous one has the permissions "adfr (0640)". That adfr file can be opened in Filezilla and didn't cause any issues when I ran my scripts. Now the session files are being created as "dfr (0640)". Is there a way to set the server (or CGI::Session) to apply "adfr (0640)" permissions?
And, in your experience, is that the likely cause of the problem?
Here you go, Håkon Hægland:
ls -l /var/www/vhosts/myDomain.com/sessions_storage
-rw-r-----. 1 MyUserName psacln 166 Jan 26 01:22 cgisess_0741489d1010b7ab36f86420e5c58e84
-rw-r-----. 1 apache apache 1769 Jan 26 12:35 cgisess_2d475576f960f6c5407d7a273c02ead1
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard-login
-rwxr-xr-x. 1 MyUserName psacln 30628 Jan 26 01:46 login.pl
-rwxr-xr-x. 1 MyUserName psacln 48391 Jan 26 00:49 login-with-pin.pl
ls -l /var/www/vhosts/domainName.com/subDomain.myDomain.com/cgi-bin/dashboard
-rwxr-xr-x. 1 MyUserName psacln 40742 Jan 24 17:47 web_content_manager
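As for forcing the permissions: the session files are created by CGI::Session's file driver (the cgisess_<id> naming in the error message is that driver's default), so one thing to try is setting the process umask before the session is created, or adjusting the file afterwards. A sketch, assuming the driver creates session files subject to the process umask:
umask 0027;  # assumption: session files are created subject to the process umask
my $session = CGI::Session->new(undef, $cgi, { Directory => $sessions_dir_location })
    or die CGI::Session->errstr;

# Fallback: fix up the session file directly after creation.
my $session_file = "$sessions_dir_location/cgisess_" . $session->id;
chmod 0640, $session_file if -e $session_file;
That said, your listing shows two different owners (MyUserName vs. apache), so the mode bits may not be the whole story.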
For anyone else reading this, it was a permissions issue. It seems to relate to suexec. Having disabled suexec temporarily, until I fully understand directory locations and permissions, all is well again.
I'm adding binaries to a release on GitHub by dragging and dropping them into the binaries upload section when creating a new release. The binaries have the following permissions on my local machine (OS X):
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file1
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file2
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file3
-rwxr-xr-x 1 user group 100 Mar 22 00:00 file4
However, when I download the binary from Releases, the file mode has changed:
-rw-r--r--@ 1 user group 100 Mar 22 09:00 file1
Has this been documented anywhere? Is there a way to preserve file permissions when uploading binaries to github?
Is there a way to preserve file permissions when uploading binaries to github?
I don't believe so. People who download the file will need to chmod +x it to get the execute permission back. A file's permissions are not stored within the file itself; they are an attribute of the file on the file system.
If you really need to preserve complex permissions, I would suggest storing the files in a container format that preserves them, like a DMG for macOS, and uploading the container instead.
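A tarball preserves the mode bits as well; here is a minimal sketch using the core Archive::Tar module (file names are illustrative):
use strict;
use warnings;
use Archive::Tar;

my $tar = Archive::Tar->new;
$tar->add_files('file1', 'file2', 'file3', 'file4');  # records mode, owner and mtime per entry
$tar->write('binaries.tar.gz', 9);                    # 9 = gzip compression level
Anyone extracting the archive gets the execute bits back without a manual chmod.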
I want to run rdiff-backup and then switch off the Raspberry Pi it is running on.
I use the following script:
#!/bin/sh
date > /home/mik/rdiff-backup.log
echo "rsync start" >> /home/mik/rdiff-backup.log
rdiff-backup -v5 --print-statistics offlinebackup@server::/srv/backup /srv/datenserverBackup/backup >> /home/mik/rdiff-backup.log 2>&1
sync
date >> /home/mik/rdiff-backup.log
echo "rdiff-backup end" >> /home/mik/rdiff-backup.log
df -h >> /home/mik/rdiff-backup.log
sync
halt
The log file looks good (for the rdiff-backup part):
Sat 12 Aug 08:20:59 UTC 2017
rsync start
Unable to import win32security module. Windows ACLs
not supported by filesystem at /srv/backup
escape_dos_devices not required by filesystem at /srv/backup
Warning: name offlinebackup not found on system, dropping ACL entry.
Further ACL entries dropped with this name will not trigger further warnings
Using rdiff-backup version 1.2.8
Executing ssh -C offlinebackup@server rdiff-backup --server
-----------------------------------------------------------------
Detected abilities for source (read only) file system:
Access control lists On
Extended attributes On
Windows access control lists Off
Case sensitivity On
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
Unable to import win32security module. Windows ACLs
not supported by filesystem at /srv/datenserverBackup/backup/rdiff-backup-data/rdiff-backup.tmp.0
escape_dos_devices not required by filesystem at /srv/datenserverBackup/backup/rdiff-backup-data/rdiff-backup.tmp.0
-----------------------------------------------------------------
Detected abilities for destination (read/write) file system:
Ownership changing On
Hard linking On
fsync() directories On
Directory inc permissions On
High-bit permissions On
Symlink permissions Off
Extended filenames On
Windows reserved filenames Off
Access control lists On
Extended attributes On
Windows access control lists Off
Case sensitivity On
Escape DOS devices Off
Escape trailing spaces Off
Mac OS X style resource forks Off
Mac OS X Finder information Off
-----------------------------------------------------------------
Backup: must_escape_dos_devices = 0
Starting increment operation /srv/backup to /srv/datenserverBackup/backup
Processing changed file .
Incrementing mirror file /srv/datenserverBackup/backup
Processing changed file abc
Incrementing mirror file /srv/datenserverBackup/backup/abc
Processing changed file abc/def
Incrementing mirror file /srv/datenserverBackup/backup/abc/def
Processing changed file abc/def/testfile.dxf
Incrementing mirror file /srv/datenserverBackup/backup/abc/def/testfile.dxf
--------------[ Session statistics ]--------------
StartTime 1502526061.00 (Sat Aug 12 08:21:01 2017)
EndTime 1502527913.72 (Sat Aug 12 08:51:53 2017)
ElapsedTime 1852.72 (30 minutes 52.72 seconds)
SourceFiles 151099
SourceFileSize 386321558216 (360 GB)
MirrorFiles 151097
MirrorFileSize 386321447731 (360 GB)
NewFiles 2
NewFileSize 110485 (108 KB)
DeletedFiles 0
DeletedFileSize 0 (0 bytes)
ChangedFiles 1
ChangedSourceSize 0 (0 bytes)
ChangedMirrorSize 0 (0 bytes)
IncrementFiles 4
IncrementFileSize 0 (0 bytes)
TotalDestinationSizeChange 110485 (108 KB)
Errors 0
--------------------------------------------------
The backup is working, but the script ends right there. rdiff-backup.log contains the full report of rdiff-backup, but neither the line "rdiff-backup end" nor the output of "df -h" ever appears.
How can I make it run to the end?
Thanks for your answers
I finally found a workaround that solves my problem.
My script, which is called from /etc/init.d after booting, calls the script that does the actual work (i.e. backs up my data and writes the log file) as a background task.
/etc/init.d/CallAfterBoot.sh
#!/bin/sh
sleep 30
/home/me/DoBackup.sh & # '&' starts the script in background
/home/me/DoBackup.sh is the script I posted above, which now runs correctly.
The same script running as the same user now behaves differently, so there must be a bug somewhere; however, it works for me now.
I use PuTTY to log in to a Solaris server. While performing a copy operation, I pressed the left arrow key to edit the file name, but it kept inserting the characters ^[[D. In desperation I pressed the return key, and the copy operation completed:
cp temp.jar temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D
I was planning to rename it temp.jar.test. I used the ls command to check what had happened, and to my surprise two files came up with the same name!
root[dev1]# ls -lt temp*
-rw-r--r-- 1 root other 488554 Apr 11 02:25 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar
-rw-r--r-- 1 root other 488554 Apr 11 02:22 temp.jar.041114
-rw-r--r-- 1 root other 488487 Sep 30 2013 temp.jar.032514
I used the rm command to delete it; the original file got deleted, but the copy with the ^[[D characters in its name is not getting deleted, and I'm getting a message like 'eisvr.jar.: No such file or directory'.
Help me delete the file. I tried issuing 'rm temp.jar^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D^[[D', but it only resulted in more errors.
The simplest way would be to run this command:
rm -i temp.jar?????????*
and answer yes when prompted to remove the bogus one.
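Background: each ^[[D is the three-character sequence the terminal sends for the left-arrow key (ESC, [, D), so the bogus name is temp.jar followed by 27 unprintable characters. The nine ? wildcards plus * require at least nine extra characters after temp.jar, which is why the pattern skips temp.jar itself and the .041114/.032514 backups. If you'd rather match the junk directly, a Perl one-liner can target the literal escape characters (a sketch):
perl -e 'unlink grep { /\e/ } glob("temp.jar*")'
It deletes only the directory entries whose names contain an ESC character.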
I'm simply trying to get a list of filenames given a path with a wildcard.
my $path = "/foo/bar/*/*.txt";
my @file_list = glob($path);
foreach my $current_file (@file_list) {
    print "\n- $current_file";
}
Mostly this works perfectly, but if there's a file greater than 2GB somewhere in one of the /foo/bar/* subpaths, the glob returns an empty array without any error or warning.
If I remove the file, or add a character/bracket sequence like this:
my $path = "/foo/bar/*[0-9]/*.txt";
or
my $path = "/foo/bar/*1/*.txt";
then the glob works again.
UPDATE:
Here's an example (for a matter of business policy I had to mask the pathname):
[root]/foo/bar # ls -lrt
drwxr-xr-x 2 root system 256 Oct 11 2006 lost+found
drwxr-xr-x 2 root system 256 Dec 27 2007 abc***
drwxr-xr-x 2 root system 256 Nov 12 15:32 cde***
-rw-r--r-- 1 root system 2734193149 Nov 15 05:07 archive1.tar.gz
-rw-r--r-- 1 root system 6913743 Nov 16 05:05 archive2.tar.gz
drwxr-xr-x 2 root system 256 Nov 16 10:00 fgh***
[root]/foo/bar # /home/user/test.pl
[root]/foo/bar #
Removing the >2GB file (or globbing with "/foo/bar/[acf]*/*" instead of "/foo/bar/*/*"):
[root]/foo/bar # ls -lrt
drwxr-xr-x 2 root system 256 Oct 11 2006 lost+found
drwxr-xr-x 2 root system 256 Dec 27 2007 abc***
drwxr-xr-x 2 root system 256 Nov 12 15:32 cde***
-rw-r--r-- 1 root system 6913743 Nov 16 05:05 archive2.tar.gz
drwxr-xr-x 2 root system 256 Nov 16 10:00 fgh***
[root]/foo/bar # /home/user/test.pl
- /foo/bar/abc***/heapdump.phd.gz
- /foo/bar/cde***/javacore.txt.gz
- /foo/bar/fgh***/stuff.txt
[root]/foo/bar #
Any suggestion?
I'm working with:
Perl 5.8.8
AIX 5.3
The filesystem is a local jfs.
In the absence of a proper answer, you're going to want a workaround. I'm guessing you've hit some platform-specific bug in the glob() implementation of 5.8.8.
I had a quick look at the source on CPAN but my C is too rusty to spot anything useful.
There have been lots of changes to that module though, so a bug may well have been reported and fixed. You're not even on the last release of 5.8 - there's a 5.8.9 out there which mentions updates to AIX compatibility and File::Glob.
I'd test this by installing local::lib if you haven't already, and then perhaps cpanm, and try updating File::Glob to see what that does. You might need to download the files by hand from e.g. here.
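To check what you're actually running, two one-liners are handy (the second shows whether this perl was built with large-file support -- my guess, not a confirmed cause, for why a >2GB file would upset glob()):
perl -MFile::Glob -le 'print $File::Glob::VERSION'
perl -V:uselargefiles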
If that solves the problem then you can either deploy updates to the required systems, or you'll have to re-implement the bits of glob() you want. Which is going to depend on how complex your patterns get.
If it doesn't solve the problem then at least you'll be able to stick some printf's into the code and see what it's doing.
Hopefully someone will post a real answer and make this redundant about 5 minutes after I click "Post Your Answer" though.
I've never used the newer glob function before, so I can't comment on benefits/problems, but it seems quite a lot of people have had issues using it: see https://stackoverflow.com/search?q=perl+glob&submit=search for some questions and possible solutions.
If you don't mind trying out something else, here is my tried and tested 'old school' Perl solution, which I have used in countless projects:
my $path = "/foo/bar/";
my @result_array = qx(find $path -iname '*.txt'); # run the system find command (each element keeps its trailing newline; chomp as needed)
If you would rather not run a system command from within your script, then look up the built-in File::Find module instead: http://search.cpan.org/~dom/perl-5.12.5/lib/File/Find.pm
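For this particular pattern, a File::Find version mimicking "/foo/bar/*/*.txt" could look like this (a sketch; it recurses deeper than the glob and filters by directory):
use strict;
use warnings;
use File::Find;

my @file_list;
find(sub {
    return unless -f $_ && /\.txt\z/;
    # keep only files exactly one directory below /foo/bar
    push @file_list, $File::Find::name
        if $File::Find::dir =~ m{\A/foo/bar/[^/]+\z};
}, '/foo/bar');
print "\n- $_" for @file_list;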
Good luck!