/bin/dash: strange behavior when matching wildcard * - sh

I want to iterate over all files in a given directory. This is my script (simplified):
#!/bin/sh
for F in /tmp/ZZZ/* ; do
echo $F
done
This works as expected when the directory /tmp/ZZZ/ contains files:
$ ./myscript.sh
/tmp/ZZZ/aaa
But the problem is that when the directory is empty, instead of echoing nothing, the script echoes the literal *:
$ ./myscript.sh
/tmp/ZZZ/*
Why is this happening?
Is there any way to change this behavior of the shell?
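A common workaround (not part of the original question) is to skip the unmatched pattern inside the loop, since POSIX sh and dash leave the pattern in place when nothing matches and have no nullglob option like bash or zsh:
#!/bin/sh
for F in /tmp/ZZZ/* ; do
    # If the glob matched nothing, $F is the literal pattern; skip it.
    [ -e "$F" ] || [ -L "$F" ] || continue
    echo "$F"
done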

Related

Autocomplete directories in a subfolder with the Fish shell

I'm having trouble getting the 'complete' function in the fish shell to behave as I would like and I've been searching for an answer for days now.
Summary
Essentially I need to provide tab directory auto-completion as if I was in a different directory to the one I am currently in. It should behave exactly as 'cd' and 'ls' do, but with the starting point in another directory. It seems like such a trivial thing to be able to do but I can't find a way to make it work.
Explanation
Example folder structure below
- root
  - foo
    - a
      - dir1
        - subdir1
      - dir2
        - subdir2
    - b
      - dir3
        - subdir3
      - dir4
        - subdir4
I am running these scripts whilst in the 'root' directory, but I need tab auto-complete to behave as if I was in the 'foo' directory.
testfunc -d a/dir2/subdir2
Instead of
testfunc -d foo/a/dir2/subdir2
There are a lot of directories inside 'foo' and a lot of sub-directories within them, and this auto-complete behaviour is necessary to speed up our process (this script is used extensively throughout the day).
Attempted Solution
I've tried using the 'complete' builtin to get this working by specifying the directory to use, but all this managed to do was auto-complete the first level of directories with a space after the argument instead of continuing to auto-complete like 'cd' would.
complete -x -c testfunc -a "(__fish_complete_directories ./foo/)"
Working bash version
I have already got this working in Bash and I am trying to port it over to fish. See below for the Bash version.
_testfunc()
{
    local cur prev words cword
    _init_completion || return
    compopt +o default
    case $prev in
        testfunc)
            COMPREPLY=( $( compgen -W '-d' -- "$cur" ) )
            compopt +o nospace
            return
            ;;
        -d)
            curdir=$(pwd)
            cd foo/ 2>/dev/null && _filedir -d
            COMPREPLY=( $( compgen -d -S / -- "$cur" ) )
            cd $curdir
            return
            ;;
    esac
} &&
complete -o nospace -F _testfunc testfunc
This is essentially stepping into the folder that I want, doing the autocompletion, then stepping back into the original folder that the script was run in. I was hoping this would be easier in Fish after getting it working in Bash (I need to support these two shells), but I'm just pulling my hair out.
Any help would be really appreciated!
I am not a bash completions expert, but it looks like the bash completions are implemented by changing directories, running completions, and then changing back. You can do the same in fish:
function complete_testfunc
    set prevdir $PWD
    cd foo
    __fish_complete_directories
    cd $prevdir
end
complete -x -c testfunc -a "(complete_testfunc)"
does that work for you?

Standalone shell script works fine, but when used in srcs of sh_binary it's not working

I have a project structure as follows:
PROJECT_STRUCTURE
Now my_shbin.sh is as below:
#!/bin/bash
find ../../ \( -name "*.java" -o -name "*.xml" -o -name "*.html" -o -name "*.js" -o -name "*.css" \) | grep -vE "/node_modules/|/target/|/dist/" >> temp-scan-files.txt
# scan project files for offensive terms
IFS=$'\n'
for file in $(cat temp-scan-files.txt); do
    grep -iF -f temp-scan-regex.txt $file >> its-scan-report.txt
done
This script works completely fine when invoked individually and gives the required results. But when I add the below sh_binary to my BUILD file, I do not see anything in the temp-scan-files.txt file and thus nothing in the its-scan-report.txt file:
sh_binary(
    name = "findFiles",
    srcs = ["src/test/resources/my_shbin.sh"],
    data = glob(["temp-scan-files.txt", "temp-scan-regex.txt", "its-scan-report.txt"]),
)
I ran the sh_binary from IntelliJ using the play icon and also tried running it from the terminal using bazel run :findFiles. No error is shown, but I cannot see any data in temp-scan-files.txt.
Any help on this issue would be appreciated. The Bazel documentation is very limited, with almost no information beyond the basic use case.
When a binary is run using bazel run, it's run from the "runfiles tree" for that binary. The runfiles tree is a directory tree that bazel creates that contains symlinks to the binary's inputs. Try putting pwd and tree at the beginning of the shell script to see what this looks like. The reason that the runfiles tree doesn't contain any of the files in src/main is that they're not declared as inputs to the sh_binary (e.g. using the data attribute). See https://docs.bazel.build/versions/master/user-manual.html#run
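A quick way to see this for yourself (just a debugging sketch along the lines of the suggestion above) is to add something like the following at the top of my_shbin.sh and run bazel run :findFiles again:
#!/bin/bash
# Debugging aid: print where "bazel run" executes the script from
# and what the runfiles tree contains.
pwd
find . -maxdepth 3    # or "tree" if it is installed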
Another thing to note is that the glob in data = glob(["temp-scan-files.txt", "temp-scan-regex.txt", "its-scan-report.txt"]) won't match anything, because those files live under src/test/resources relative to the BUILD file. The script also tries to modify these files, and it's not typically possible to modify input files: if this sh_binary were run as a build action, the inputs would be effectively read-only. It would only work here because bazel run is similar to building the binary and then running it outside Bazel, e.g. bazel build //target && bazel-bin/target.
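For reference only (this alone would not make the script work as intended), data paths are resolved relative to the BUILD file, so the regex file would have to be listed along these lines:
sh_binary(
    name = "findFiles",
    srcs = ["src/test/resources/my_shbin.sh"],
    # Paths here are relative to the BUILD file; the original glob matched nothing.
    data = ["src/test/resources/temp-scan-regex.txt"],
)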
The most straight-forward way to do this might be something like this:
genrule(
    name = "gen_report",
    srcs = [
        # This must be the first element of srcs so that
        # the regex file gets passed to the "-f" of grep in cmd below.
        "src/test/resources/temp-scan-regex.txt",
    ] + glob([
        "src/main/**/*.java",
        "src/main/**/*.xml",
        "src/main/**/*.html",
        "src/main/**/*.js",
        "src/main/**/*.css",
    ],
    exclude = [
        "**/node_modules/**",
        "**/target/**",
        "**/dist/**",
    ]),
    outs = ["its-scan-report.txt"],
    # The first element of $(SRCS) will be the regex file, passed to -f.
    cmd = "grep -iF -f $(SRCS) > $@",
)
$(SRCS) is the list of files in srcs delimited by spaces, and $@ means "the output file, if there's only one". $(SRCS) will contain the temp-scan-regex.txt file, which you probably don't want to include as part of the scan, but if it's the first element, then it will be the parameter to -f. This is maybe a bit hacky and a little fragile, but it's also kind of annoying to try to separate the file out (e.g. using grep or sed or array slicing).
Then bazel build //project/root/myPackage:its-scan-report.txt

How can I make a function run every time cd successfully changes to another directory within sh on FreeBSD?

I'm using sh as my shell on FreeBSD but I want to be able to have a pretty prompt like the one bash gives me on Ubuntu. There are two things that the FreeBSD implementation of sh seems to lack as far as PS1 escape characters go:
- The \w works but does not expand $HOME to ~, so this is something I have already hacked up myself
- I can use PS1 to update the prompt on the terminal, but as far as I can tell it is not possible to use the PS1 variable to update the title bar as well. ESC and BEL fail to set the title as one would expect if they were using bash or ksh
Here is my .shrc file
update_prompt() {
    case "$PWD" in
        "$HOME"*)
            pretty_pwd="~${PWD#*"${HOME}"}"
            ;;
        "/usr$HOME"*)
            pretty_pwd="~${PWD#*"/usr${HOME}"}"
            ;;
        *)
            pretty_pwd="$PWD"
            ;;
    esac
    case "$TERM" in
        xterm*|rxvt*)
            PS1="[$USER@\\h $pretty_pwd]\\$ "
            ;;
        *)
            ;;
    esac
    printf "\\033]0;[%s@$(hostname -s): %s]\\007" "$USER" "$pretty_pwd"
}
update_prompt
So when I fire up a terminal or log in via ssh, it gives the pretty prompt that I like. But now I need this function to run every time that cd is executed and returns an exit status of 0.
I was going to use an alias that was something like:
alias cd="cd $1 && update_prompt"
but that was before I realized that aliases do not accept arguments. How might I go about doing something like this?
You can use a function instead of an alias:
cd() {
    # "command" bypasses this function and runs the real cd builtin.
    command cd "$@" && update_prompt
}
Just put it into ~/.shrc. You have to use command here to let sh know that you are referring to the actual cd builtin command instead of the function you've just defined.
Refer to the sh(1) manual page for the details on how to make sh(1) source the ~/.shrc file when it starts:
Therefore, a user should place commands that are to be executed only at login
time in the .profile file, and commands that are executed for every shell
inside the ENV file. The user can set the ENV variable to some file by placing
the following line in the file .profile in the home directory, substituting for
.shrc the filename desired:
ENV=$HOME/.shrc; export ENV
I use this trick in my cd alias manager. Here's a link to the source code of the function: https://github.com/0mp/goat/blob/v2.5.0/libgoat.sh#L31-L57
You can do it with alias+arguments if you swap the commands:
$ alias cd="echo change; cd"
$ pwd
/nas
$ cd /
change
$ pwd
/
$ cd /etc
change
$ pwd
/etc
$

How can I ensure my autocompleted spaces are fed into my function properly?

I'm using zsh, and am trying to write a function to operate on a URL and a pathname:
function my-function
{
    somecommand --url $1 $(readlink -f $2)
}
(to complicate things somewhat, the function actually uses sh syntax, as it is sourced from my ~/.zshrc using a trick like this). The readlink is there to expand symlinks and ensure directories such as . are evaluated correctly (the directory name is stored for later use by somecommand).
When I type a command from the command-line like this:
my-function http://example.org/example /tmp/myexampledirectory
... it works fine, even if I autocomplete the directory name. However, if the directory name contains spaces, zsh completes it like this:
my-function http://example.org/example /tmp/My\ Example\ Directory
For most "normal" commands (cp, mv, etc.) that never seems to cause a problem. However, in my case, somecommand sees $2 as only being /tmp/My - presumably the rest is seen as another argument.
How can I avoid this situation? I would prefer not to alter the standard zsh autocompletion, but rather find a way for my function to handle this.
The zsh completion system works very well here, and the solution is very simple: just put double quotes around the readlink argument in the script:
somecommand --url $1 $(readlink -f "$2")
The point is that without quotes the backslashes that escape the whitespace get removed before readlink sees them. Compare three results:
1. Without backslashes and quotes, readlink -f assumes there are three different files/directories (resolved relative to the current directory) and produces
$ readlink -f /tmp/My Example Directory
/tmp/My
/home/jimmij/Example
/home/jimmij/Directory
2. With escaping backslashes but without quotes, readlink -f understands that there is only one directory, but the backslashes are gone from its output, so somecommand receives three separate arguments
$ readlink -f /tmp/My\ Example\ Directory
/tmp/My Example Directory
3. With backslashes and double quotes, readlink -f keeps the backslashes in its output, which is (most probably) what somecommand expects
$ readlink -f "/tmp/My\ Example\ Directory"
/tmp/My\ Example\ Directory
BTW, as a rule of thumb: if there are any problems with whitespace in shell scripts (bash, zsh, whatever), the first thing to play with is the quoting around variables.
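A minimal illustration of that rule of thumb (not from the original answer; shown with sh/bash word splitting, since zsh does not split unquoted parameters by default):
d='/tmp/My Example Directory'
ls $d      # unquoted: split into three arguments (/tmp/My, Example, Directory)
ls "$d"    # quoted: a single argument, the intended path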

Running a script in bash

I have a script in one of my application folders. Usually I just cd into that location on a Unix box and run the script, e.g.
UNIX> cd My\Folder\
My\Folder> MyScript
This prints the required result.
I am not sure how to do this in a Bash script. I have done the following:
#!/bin/bash
mydir=My\Folder\
cd $mydir
echo $(pwd)
This basically puts me in the right folder to run the required script. But I am not sure how to run the script from within the code.
If you can call MyScript (as opposed to ./MyScript), obviously the current directory (".") is part of your PATH. (Which, by the way, isn't a good idea.)
That means you can call MyScript in your script just like that:
#!/bin/bash
mydir=My/Folder/
cd $mydir
echo $(pwd)
MyScript
As I said, ./MyScript would be better (not as ambiguous). See Michael Wild's comment about directory separators.
Generally speaking, Bash considers everything that does not resolve to a builtin keyword (like if, while, do etc.) as a call to an executable or script (*) located somewhere in your PATH. It will check each directory in the PATH, in turn, for a so-named executable / script, and execute the first one it finds (which might or might not be the MyScript you are intending to run). That's why specifying that you mean the very MyScript in this directory (./) is the better choice.
(*): Unless, of course, there is a function of that name defined.
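If you want to check which MyScript Bash would actually run (an illustrative check, not part of the original answer), the type builtin lists every match in lookup order:
type -a MyScript    # shows aliases, functions, and each matching file on PATH, in the order Bash considers them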
#!/bin/bash
mydir=My/Folder/
cd $mydir
echo $(pwd)
MyScript
I would rather put the name in quotes. This makes it easier to read and guards against mistakes.
#!/bin/bash
mydir="My Folder"
cd "$mydir"
echo $(pwd)
./MyScript
Your nickname says it all ;-)
When a command is entered at the prompt that doesn't contain a /, Bash first checks whether it is an alias or a function. Then it checks whether it is a built-in command, and only then does it start searching the PATH. This is a shell variable that contains a list of directories to search for commands. It appears that in your case . (i.e. the current directory) is in the PATH, which is generally considered to be a pretty bad idea.
If the command contains a /, no look-up in the PATH is performed. Instead an exact match is required. If starting with a / it is an absolute path, and the file must exist. Otherwise it is a relative path, and the file must exist relative to the current working directory.
So, you have two acceptable options:
- Put your script in some directory that is on your PATH. Alternatively, add the directory containing the script to the PATH variable.
- Use an absolute or relative path to invoke your script.
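For example (a brief sketch; the path ~/My Folder is only an illustration of where such a script might live):
# Option 1: put the script's directory on PATH, then call it by name
export PATH="$HOME/My Folder:$PATH"
MyScript

# Option 2: invoke it by an explicit path
"$HOME/My Folder/MyScript"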