unexpected bourne shell script output of case statement - sh

I am trying to find executable files, using the Bourne shell /bin/sh for greater portability. The script below produces output in which every line starts with find:.
#!/bin/sh
DIRS=`find / -perm -4000`
for DIR in "$DIRS"
do
case "$DIR" in
find:*);;
esac
done
Question: why is it producing output matching find:* when no commands are given for that pattern?
If I add a *) echo "$DIR";; clause to the case statement, it should echo the files that are executable for the current user; that is all I really want, but it isn't happening (I haven't really scripted for /bin/sh before, and this has bewildered me).
Yes, sed, awk, and cut could help immensely, but some of those commands may well not be available (why wouldn't they be available? because they might not be!), so I thought a pure Bourne shell version would be more portable. Maybe there is a better way to do substring matching in /bin/sh; any ideas?

The lines that you are trying to get rid of presumably look like this:
find: `/root': Permission denied
That's an error message. The command substitution
`find ...`
only captures output, not errors. You need to add a redirection to include the errors:
`find ... 2>&1`
Also, -perm -4000 matches the setuid bit, not an execute bit.
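Putting both points together, a minimal sketch of the loop with stderr simply discarded, so that no find: lines ever reach the case statement (the directory searched here is just an example):

```shell
#!/bin/sh
# Discard find's error messages rather than pattern-matching them away.
# -perm -4000 selects setuid files; -perm -111 would select files
# executable by everyone.
for F in `find /usr/bin -perm -4000 2>/dev/null`
do
    echo "$F"
done
```

Note the loop variable list is unquoted so it splits into one word per file; this breaks on filenames containing whitespace, which is one reason -exec is usually preferred over looping.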

You can put find directly in the for loop
for DIR in `find / -perm -4000`

Why Parameter Expansion is not working? [duplicate]

#!/bin/bash
jobname="job_201312161447_0003"
jobname_pre=${jobname:0:16}
jobname_post=${jobname:17}
This bash script gives me Bad substitution error on ubuntu. Any help will be highly appreciated.
The default shell (/bin/sh) under Ubuntu points to dash, not bash.
me@pc:~$ readlink -f $(which sh)
/bin/dash
So if you chmod +x your_script_file.sh and then run it with ./your_script_file.sh, or if you run it with bash your_script_file.sh, it should work fine.
Running it with sh your_script_file.sh will not work because the hashbang line will be ignored and the script will be interpreted by dash, which does not support that string substitution syntax.
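If the script really does have to run under /bin/sh, the same split can be written with POSIX tools. A sketch using cut (the offsets mirror the bash version; note cut columns are 1-based while bash offsets are 0-based):

```shell
#!/bin/sh
jobname="job_201312161447_0003"
# POSIX equivalents of bash's ${jobname:0:16} and ${jobname:17}
jobname_pre=$(printf '%s' "$jobname" | cut -c1-16)
jobname_post=$(printf '%s' "$jobname" | cut -c18-)
echo "$jobname_pre"    # job_201312161447
echo "$jobname_post"   # 0003
```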
I had the same problem. Make sure your script didn't have
#!/bin/sh
at the top of your script. Instead, you should add
#!/bin/bash
For others who arrive here: this exact message will also appear when using curly braces instead of parentheses for command substitution, for example ${which sh} instead of the correct $(which sh).
Your script syntax is valid bash and good.
Possible causes for the failure:
Your bash is not really bash but ksh or some other shell that doesn't understand bash's parameter substitution (your script itself looks fine and works with bash).
Do ls -l /bin/bash and check it's really bash and not sym-linked to some other shell.
If you do have bash on your system, then you may be executing your script the wrong way, e.g. ksh script.sh or sh script.sh (when your default shell is not bash). Since you have a proper shebang, ./script.sh or bash ./script.sh should be fine.
Try running the script explicitly using bash command rather than just executing it as executable.
Also, make sure you don't have an empty line as the first line of your script, i.e. make sure #!/bin/bash is the very first line of your script.
Not relevant to your example, but you can also get the Bad substitution error in Bash for any substitution syntax that Bash does not recognize. This could be:
Stray whitespace. E.g. bash -c '${x }'
A typo. E.g. bash -c '${x;-}'
A feature that was added in a later Bash version. E.g. bash -c '${x@Q}' before Bash 4.4.
If you have multiple substitutions in the same expression, Bash may not be very helpful in pinpointing the problematic expression. E.g.:
$ bash -c '"${x } multiline string
$y"'
bash: line 1: ${x } multiline string
$y: bad substitution
Both - bash or dash - work, but the syntax needs to be:
FILENAME=/my/complex/path/name.ext
NEWNAME=${FILENAME%ext}new
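A few more prefix/suffix-stripping forms that work in both sh and bash (the path here is only an example):

```shell
#!/bin/sh
FILENAME=/my/complex/path/name.ext
echo "${FILENAME%.ext}.new"   # /my/complex/path/name.new
echo "${FILENAME##*/}"        # name.ext  (strip longest leading */ match)
echo "${FILENAME%/*}"         # /my/complex/path
```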
I was adding a dollar sign twice in an expression with curly braces in bash:
cp -r $PROJECT_NAME ${$PROJECT_NAME}2
instead of
cp -r $PROJECT_NAME ${PROJECT_NAME}2
I have found that this issue is caused either by the reason given in the accepted answer, or by having a line or space before the #!/bin/bash declaration.
Running the script directly (after chmod +x) works, but running it with sh fails:
root@raspi1:~# cat > /tmp/btest
#!/bin/bash
jobname="job_201312161447_0003"
jobname_pre=${jobname:0:16}
jobname_post=${jobname:17}
root@raspi1:~# chmod +x /tmp/btest
root@raspi1:~# /tmp/btest
root@raspi1:~# sh -x /tmp/btest
+ jobname=job_201312161447_0003
/tmp/btest: 4: /tmp/btest: Bad substitution
In my case (under Ubuntu 18.04), mixing $( ) command substitution with ${ } parameter expansion works fine:
BACKUPED_NB=$(ls ${HOST_BACKUP_DIR}*${CONTAINER_NAME}.backup.sql.gz | wc --lines)
I used #!/bin/bash and tried all the approaches, like having no line before or after it.
I also tried chmod +x, but it still didn't work.
Finally I tried running the script as ./script.sh, and it worked fine.
#!/bin/bash
jobname="job_201312161447_0003"
jobname_post=${jobname:17}
root@ip-10-2-250-36:/home/bitnami/python-module/workflow_scripts# sh jaru.sh
jaru.sh: 3: jaru.sh: Bad substitution
root@ip-10-2-250-36:/home/bitnami/python-module/workflow_scripts# ./jaru.sh
root@ip-10-2-250-36:/home/bitnami/python-module/workflow_scripts#

Perl using find in qx

I am writing a Perl script that will need to SSH out to numerous remote servers to perform some gzipping of log files. On the following line, I keep receiving an error and am struggling to determine what's causing it. The error I'm getting is:
bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `cd /appdata/log/cdmbl/logs/; echo cd /appdata/log/cdmbl/logs/; find . -type f ( -iname '*' ! -iname '*.gz' ) -mmin +1440 ;; exit 0'
And of course, as you can tell by the error, the line I am trying to write is:
my $id = qx{ssh -q $cur_host "cd $log_path; echo cd $log_path; find . -type f \( -iname '*' ! -iname '*.gz' \) -mmin +1440 \;; exit 0"};
Am I overlooking something here that is causing the unexpected token '(' issue I am receiving?
NOTE: I removed the -exec from find just so I could see if I can get past this issue first.
Thanks.
You need to backslash the parentheses for the shell. A single backslash in double quotes is not enough; Perl removes it. Use a double backslash: \\(.
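The shell-level effect can be checked outside Perl too. A small sketch (scratch files only for illustration): the string the remote shell parses must still contain \( so that find, not the shell, receives the parentheses.

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/a.log" "$tmp/b.gz"
# The inner shell sees \( and \), so find gets literal ( and ) arguments;
# unescaped parentheses would be a shell syntax error instead.
sh -c "find $tmp -type f \\( -iname '*' ! -iname '*.gz' \\)"
rm -rf "$tmp"
```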
This is probably not going to answer your question, but it's a nice alternative that I would like to propose.
You said you cannot install additional modules on the production servers. You need to run a bunch of stuff where you look for files and zip them. That can all be done in Perl, and you may have more control over it than with the "doing command-line stuff from a Perl script" approach.
Take a look at Object::Remote, which was written for exactly that purpose. It lets you ssh into machines and run Perl stuff there that you have installed on your local machine. That way, you do not need to add modules or install anything on the remote. All it needs is any kind of more or less recent Perl, which fortunately almost every Linux comes with.
There is a very good lightning talk about it by the author Matt Trout that is well worth watching.
If the command you built results in a syntax error, wouldn't the first step be to see what command you built?
print qq{ssh -q $cur_host "cd $log_path; echo cd $log_path; find . -type f \( -iname '*' ! -iname '*.gz' \) -mmin +1440 \;; exit 0"}, "\n";

In Emacs-lisp, what is the correct way to use call-process on an ls command?

I want to execute the following shell command in emacs-lisp:
ls -t ~/org *.txt | head -5
My attempt at the following:
(call-process "ls" nil t nil "-t" "~/org" "*.txt" "| head -5")
results in
ls: ~/org: No such file or directory
ls: *.txt: No such file or directory
ls: |head -5: No such file or directory
Any help would be greatly appreciated.
The problem is that tokens like ~, *, and | aren't processed/expanded by the ls program. Since the tokens aren't processed, ls is looking for a file or directory literally called ~/org, a file or directory literally called *.txt, and a file or directory literally called | head -5. Hence the "No such file or directory" error messages you received.
Those tokens are processed/expanded by the shell (like the Bourne shell /bin/sh or Bash /bin/bash). Technically, interpretation of the tokens can be shell-specific, but most shells interpret at least some of the same standard tokens the same way; e.g. | means connecting programs together end-to-end in almost all shells. As a counterexample, the Bourne shell (/bin/sh) does not do ~ tilde/home-directory expansion.
If you want to get the expansions, you have to get your calling program to do the expansion itself like a shell would (hard work) or run your ls command in a shell (much easier):
/bin/bash -c "ls -t ~/org *.txt | head -5"
so
(call-process "/bin/bash" nil t nil "-c" "ls -t ~/org *.txt | head -5")
Edit: Clarified some issues, like mentioning that /bin/sh doesn't do ~ expansion.
Depending on your use case, if you find yourself wanting to execute shell commands and have the output made available in a new buffer frequently, you can also make use of the shell-command feature. In your example, it would look something like this:
(shell-command "ls -t ~/org *.txt | head -5")
To have this inserted into the current buffer, however, would require that you set current-prefix-arg manually using something like (universal-argument), which is a bit of a hack. On the other hand, if you just want the output someplace you can get it and process it, shell-command will work as well as anything else.

How to use multiple files at once using bash

I have a Perl script which is used to process some data files from a given directory. I have written the bash script below to look for the last updated file in the given directory and process that file.
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} \;
Sometimes the user copies multiple files to the data dir, and then the earlier ones are skipped: the Perl script is executed only for the last updated file. Can you please suggest how to fix this using a bash script?
Try
cd $data_dir
find \( -type f -mtime -1 \) -exec ./script.pl {} +
Note the termination of -exec with a + vs your \;
From the man page
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end;
Now that you'll have one or more file names passed into your perl script, you can alter your perl script to iterate over each passed in file name.
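The difference between the two terminators is easy to observe; a quick sketch using echo as a stand-in for script.pl:

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
# With \; the command runs once per file; with + it runs once,
# with all selected file names appended to a single command line.
echo "per-file invocations:"
find "$tmp" -type f -exec echo run: {} \;
echo "batched invocation:"
find "$tmp" -type f -exec echo run: {} +
rm -rf "$tmp"
```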
If I understood the question correctly, you need to process any files that were created or modified in a directory since the last time your script was run.
In my opinion find is not the right tool to determine those files, because it has no notion of which files it has already seen.
Using any of the -atime/-ctime/-mtime options will either produce duplicates if you run your script twice in the specified period, or miss some files if it is not executed at the right time. The timing intricacies of using these options for something like this are not easy to deal with.
I can propose a few alternatives:
a) Use three directories instead of one: incoming/ processing/ done/. Your users should only be allowed to put files in incoming/. You move any files in there to processing/ with a simple mv incoming/* processing/ before running your Perl script. Then you move them from processing/ to done/ when it's over.
In my opinion this is the simplest and best solution, and the one used by mail servers etc when dealing with this issue. If I were you and there were not any special circumstances preventing you from doing this, I'd stop reading here.
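A minimal sketch of option (a), with echo standing in for the real script.pl (the directory names are the ones proposed above):

```shell
#!/bin/sh
mkdir -p incoming processing done
# Claim whatever has arrived, so files added mid-run wait for the next pass.
mv incoming/* processing/ 2>/dev/null
for f in processing/*; do
    [ -e "$f" ] || continue              # nothing to process
    echo "would run: ./script.pl $f"     # stand-in for the real script
    mv "$f" done/
done
```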
b) Have your finder script touch a special file (e.g. .timestamp, perhaps in a different directory, so that your users will not tamper with it) when it's done. This will allow your script to remember the last time it was run. Then use
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' ';'
to run your perl script for each file. You should modify your perl script so that it can run repeatedly with a different file name each time. If you can modify it to accept multiple files in one go, you can also run it with
find \( -cnewer .timestamp -o -newer .timestamp \) -type f -exec ./script.pl '{}' +
which will minimise the number of ./script.pl processes. Take care to handle the first run of the find script, when the .timestamp file is missing. A good solution would be to simply ignore it by not using the -*newer options at all in that case. Also keep in mind that there is a race condition where files added after find was started but before touching the timestamp file will not be processed.
c) As a variation of (b), have your script update the timestamp with the time of the processed file that was created/modified most recently. This is tricky, because find cannot order its output on its own. You could use a wrapper around your perl script to handle this:
#!/bin/bash
for i in "$@"; do
    find "$i" \( -cnewer .timestamp -o -newer .timestamp \) -exec touch -r '{}' .timestamp ';'
done
./script.pl "$@"
This will update the timestamp if it is called to process a file with a newer mtime or ctime, minimising (but not eliminating) the race condition. It is however somewhat awkward - unavoidable since bash's [[ -nt option seems to only check the mtime. It might be better if your perl script handled that on its own.
d) Have your script store each processed filename and its timestamps somewhere and then skip duplicates. That would allow you to just pass all files in the directory to it and let it sort out the mess. Kinda tricky though...
e) Since you are using Linux, you might want to have a look at inotify and the inotify-tools package - specifically the inotifywait tool. With a bit of scripting it would allow you to process files as they are added to the directory:
inotifywait -e MOVED_TO -e CLOSE_WRITE -m -r testd/ | grep --line-buffered -e MOVED_TO -e CLOSE_WRITE | while read d e f; do ./script.pl "$f"; done
This has no race conditions, as long as your users do not create/copy/move any directories rather than just files.
The Perl script will only execute against the files which find gives it. Perhaps you should remove the -mtime -1 option from the find command so that it picks up all the files in the directory?

run program multiple times using one line shell command

I have the following gifs on my linux system:
$ find . -name *.gif
./gifs/02.gif17.gif
./gifs/fit_logo_en.gif
./gifs/halloween_eyes_63.gif
./gifs/importing-pcs.gif
./gifs/portal.gif
./gifs/Sunflower_as_gif_small.gif
./gifs/weird.gif
./gifs2/00p5dr69.gif
./gifs2/iss013e48788.gif
...and so on
What I have written is a program that converts GIF files to BMP with the following interface:
./gif2bmp -i inputfile -o outputfile
My question is, is it possible to write a one line command using xargs, awk, find etc. to run my program once for each one of these files? Or do I have to write a shell script with a loop?
For that kind of work, it may be worth looking at the find man page, especially the -exec option.
You can write something along the line of:
find . -name '*.gif' -exec gif2bmp -i {} -o {}.bmp \;
You can play with combinations of dirname and basename to obtain better naming for the output file, though in this case I would prefer to use a shell for loop, something like:
for i in `find . -name "*.gif"`; do
    DIR=`dirname "$i"`
    NAME=`basename "$i" .gif`
    gif2bmp -i "$i" -o "${DIR}/${NAME}.bmp"
done
Using GNU Parallel you can do:
parallel ./gif2bmp -i {} -o {.}.bmp ::: *.gif
The added benefit is that it will run one job for each cpu core in parallel.
Watch the intro video for a quick introduction: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (http://www.gnu.org/software/parallel/parallel_tutorial.html). Your command line will love you for it.