Update: 07/12/13
The script works from the command line.
"------extra line" marks an extra return keystroke in the editor.
XAMPP: 1.8.2
Server: Apache 2.4
Issue:
I keep receiving the error "End of script output before headers: hello.pl" for a simple hello-world Perl script. I'm trying to execute the script via the XAMPP web server.
Curious Note:
I can use another Perl script that initially works. However, when I make a simple change such as adding a space, a return, or a comment ("#"), the script no longer functions. If I remove the change and save, the script works again.
Check List
Confirmed correct path to perl
Output header (see Perl code below)
Extra line at end of script (I heard this could resolve the issue)
Confirmed correct privileges in httpd.conf
Transferred the file via FTP in ASCII mode
Perl Script:
#!"C:\xampp\perl\bin\perl.exe"
print "Content-Type: text/html\n\n";
print "hello world";
------extra line
httpd.conf
<Directory "C:/xampp/htdocs">;
Options Indexes FollowSymLinks Includes ExecCGI
AllowOverride All
Require all granted
</Directory>;
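One thing worth double-checking alongside ExecCGI is that .pl files are actually mapped to the CGI handler. A sketch of the directive (XAMPP may already ship an equivalent line elsewhere in httpd.conf):

AddHandler cgi-script .cgi .pl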
Maybe your editor changes the line-ending characters to the Windows ones.
CGI output needs to start with the HTTP header, then two \n, then the body between the right HTML tags (Why doesn't my Perl CGI program work on Windows?).
Check the actual characters in an editor that shows you the line endings (like Notepad++).
To the best of my knowledge, the shebang (#!) line is ignored on Windows.
The probable cause:
http://perl.baczynski.com/wtf/solved-mystery-perl-on-xampp-wont-run-modified-scripts
tl;dr: turn off your COMODO antivirus, or its sandbox feature.
Might be a known PHP bug (https://bugs.php.net/bug.php?id=66474). Try different versions of PHP?
This is probably SELinux blocking it.
Try this:
setsebool -P httpd_enable_cgi 1
chcon -R -t httpd_sys_script_exec_t cgi-bin/your_script.cgi
Related
I have a program that checks whether the files in my directory are readable, writable, and executable.
I have it set up so it looks like this:
if (-e $file){
print "exists";
}
if (-x $file){
print "executable";
}
and so on
But my issue is that when I run it, it shows that the text files are executable too: plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system) the "executable" setting is just a flag that is set in the directory entry for a file. That flag can be set on or off for any file.
There are actually three of these permissions, which record whether you can read, write, or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what the various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "properties" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
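As a quick illustration (the file name is hypothetical), the execute setting lives in the mode bits that stat reports, not in the file's contents:

#!/usr/bin/perl
use strict;
use warnings;

my $file = 'test.txt';                  # hypothetical file
die "no such file: $file\n" unless -e $file;

my $mode = (stat $file)[2] & 07777;     # permission bits only
printf "%s has mode %04o\n", $file, $mode;
print "executable by me\n" if -x $file;

chmod 0644, $file;                      # clear the execute bits; -x should now report false
print "still executable\n" if -x $file;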
The statement if (-x $file) does not check whether a file is an executable but whether your user has execute privileges on it.
For checking whether a file is "really" executable, I'm afraid there isn't a magic method. You may try to use:
if (-T $file) to check whether the file has an ASCII or UTF-8 encoding.
if (-B $file) to check whether the file is binary.
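A rough sketch of how these tests could be combined (the file name is hypothetical):

#!/usr/bin/perl
use strict;
use warnings;

my $file = 'test.txt';   # hypothetical file
if (-x $file) {
    if (-T $file) {
        print "$file is executable but looks like a text file\n";
    }
    elsif (-B $file) {
        print "$file is executable and looks like a binary\n";
    }
}
else {
    print "$file is not executable by this user\n";
}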
If this is unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that every file can be executed. The question here is: is the execution of, e.g., test.txt going to throw an error on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, it will write to STDERR something like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch all your files and you may not be willing to do so. Especially if the Perl script is in the same directory that you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)
OK, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output to different log files (so I can summarize errors/warnings at the end and get some statistics on the files built). I wanted to use the colorgcc Perl script (colorgcc.1.3.2) to colorize the output from gcc and had found in other places that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or the build process for these isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
dependant_file1.h \
dependant_file2.h (and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but, since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex so it doesn't match this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR envar that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but doesn't if I cd into the directory and just run make (obviously setting and unsetting it manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc input for the flag to generate the .d files and just directly called the compiler in that case. This was inserted in place of the original TTY check.
foreach my $argnum (0 .. $#ARGV)
{
    # If gcc was asked to generate dependency (.d) files (-M/-MM and friends),
    # skip the colorizing entirely and hand the arguments straight to the compiler.
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'Perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, and the source file builds do (both to the terminal and to my log files, as I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".
I have a customized Perl module (Modulehere) that takes an xls sheet and parses it.
I tried to run it from the command line itself, like:
perl -I /home/suser/modules -e "use Modulehere;Modulehere::load_it('/tmp/test.xls')"
But it gives an error like:
Can't open perl script "–e": No such file or directory
Please help!
It works on my machines (OS X and Linux), but looking at the documentation (man perlrun):
-Idirectory
Directories specified by -I are prepended to the search path for modules (@INC).
There is no space between -I and the directory. Maybe your Perl version is being too strict and considering everything after the space as a script file.
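For example, with the directory attached directly to -I (same module and path as in the question, so only the spacing changes), the invocation would look like:

perl -I/home/suser/modules -e "use Modulehere; Modulehere::load_it('/tmp/test.xls')"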
I'm confronted with a rather strange problem an echo command causes in a script.
It's supposed to be really REALLY basic stuff, but still, there's something "off".
Suppose, I have this script:
#!/bin/bash
# SERVERPID='cat lite_server_pid.txt'
# kill -9 $SERVERPID
nohup java -Xmx3G -Xms2G -jar tekkit_lite_065.jar nogui > output.txt &
echo $! > lite_server_pid.txt
Yes, this starts my own little Minecraft/Tekkit-Server. ;-)
The problem is that the file that gets created is (for some reason) named
lite_server_pid.txt?
and YES, this includes the "?"! Running the same command directly in the shell correctly creates a file without the ?. Also, the content of the file is the desired process ID.
Still, the ? following the filename is a major problem...
What am I doing wrong?
Check your file for DOS line endings. I suspect that ? is actually your terminal's attempt to display a carriage return (\r). Since bash expects UNIX-style newlines, the carriage return part of the DOS newline (\r\n) is treated as a legal character for the file name.
Run your script through dos2unix.
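If dos2unix isn't available, a Perl one-liner does the same job in place (the script name here is just a placeholder):

perl -pi -e 's/\r$//' your_script.sh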
I'm using procmail to forward emails to different folders in my Maildir.
I use these two lines to get the FROM and TO from the mail, which works fine.
FROM=`formail -x"From:"`
TO=`formail -x"To:"`
These two commands return the whole line without the From: and To: prefix.
So I get something like:
Firstname Lastname <firstname.lastname@mail-domain.com>
Now I want to extract the email address between < and >.
For this I pipe the variables FROM and TO through grep like this:
FROM_PARSED=`echo $FROM | grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`
TO_PARSED=`echo $TO | grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`
But when I print FROM_PARSED into the procmail log by using LOG=FROM_PARSED, I get an empty string in FROM_PARSED and TO_PARSED.
But if I run these commands in my console, everything works fine. I tried many other approaches, using grep, egrep, sed and even cut (cutting on < and >). All of them work on the console, but when I use them in procmail they just return nothing.
Is it possible that procmail is not allowed to use grep and sed commands? Something like a chroot?
I don't get any errors in my procmail log. I just want to extract the valid email address from the FROM and TO lines. Extracting with formail works, but parsing it with grep or sed fails, even though the expression is correct.
Could somebody help? Maybe I need to set up procmail somehow.
Strange.
I added this to the user's .procmailrc file:
SHELL=/bin/bash
The user's shell was set to /bin/false, which is correct because it's a mail user with no SSH access at all.
You should properly quote "$FROM" and "$TO".
You will also need to prefix grep with LC_ALL=POSIX to ensure that [:alnum:] actually matches the 26 well-known letters and 10 digits of the English alphabet.
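Putting both suggestions together, the assignments would look something like this (the regex is the one from the question, just with quoting added and LC_ALL set):

FROM_PARSED=`echo "$FROM" | LC_ALL=POSIX grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`
TO_PARSED=`echo "$TO" | LC_ALL=POSIX grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*'`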
You already solved this, but to answer your actual question, it is possible to run procmail in a chroot, but this is certainly not done by Procmail itself. Sendmail used to come with something called the Sendmail Restricted Shell (originally called rsh but renamed to remsh) which allowed system administrators to chroot the delivery process. But to summarize, this is a feature of the MTA, not of Procmail.