Pipe Multiple Strings in Sh Script

I'm trying to pipe multiple strings into a script similar to the following:
#!/bin/sh
echo "First name: ";
read ANSWER1;
echo "Last name: ";
read ANSWER2;
echo $ANSWER1 $ANSWER2;
I want to be able to pipe in values like the following, or something similar (I don't want to have to update the sh script), and get this result:
$ echo "Bugs"; echo "Bunny" | scriptName
Bugs Bunny

You need a command group:
{ echo "Bugs"; echo "Bunny"; } | scriptName
Each command inside { ...; } writes to the group's standard output, and the group's combined output goes down the pipe. (In your attempt, only the second echo is part of the pipeline; the first echo writes straight to the terminal.)

For multiline input, use a heredoc instead of multiple echos:
cat << EOF | scriptName
Bugs
Bunny
EOF
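If you'd rather not type a group or a heredoc, a single printf also emits one value per line (assuming scriptName simply reads its answers from standard input):
printf '%s\n' "Bugs" "Bunny" | scriptName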


csh: set: No match

I define a function, an array and a variable:
set fnctn = "F(x)=Vx1*(1+cos(1*x-pi))"
set Vx = ( 1 1 1 1 )
set Vx1 = $Vx[1]
The following commands do what I want:
echo "$fnctn" | sed "s/Vx1/$Vx1/"
set fnctn2 = `echo "$fnctn" | sed "s/Vx1/$Vx1/"`
echo "$fnctn2"
or even:
echo "$fnctn" | sed "s/Vx1/$Vx[1]/"
But storing the answer to the later command in a variable such as:
set fnctn2 = `echo "$fnctn" | sed "s/Vx1/$Vx[1]/"`
reports the following error message:
set: No match.
Where is the trick?
PS: please do not suggest that I switch to bash :-)
Because of the square brackets, csh interprets the word as a pattern and tries to do filename substitution ("globbing") on it. Since you don't have any files with names that match that "pattern", it tells you that it can't find a match.
Just inhibit filename substitution like this:
set noglob
before you attempt the assignment.
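For example, a minimal sketch using the same variables (re-enable globbing afterwards if the rest of your script relies on it):
set noglob
set fnctn2 = `echo "$fnctn" | sed "s/Vx1/$Vx[1]/"`
unset noglob
echo "$fnctn2"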
The catch here is that for $Vx[1], filename substitution is for some reason attempted twice: apparently, first on evaluation of the variable, then on evaluation of the result of the command substitution, while for $Vx1 it's only attempted once, on variable substitution:
> ls
f1 f2 f3
> echo *
f1 f2 f3
> set v=("*" "?2")
> set v1="$v[1]"
> set echo=1
> echo `echo ${v1}`
echo `echo ${v1}`
echo *
f1 f2 f3
> echo `echo "${v1}"`
echo `echo "${v1}"`
echo *
*
> echo "${v[1]}"
echo *
*
> echo `echo "${v[1]}"`
echo `echo "${v[1]}"`
echo *
f1 f2 f3
My guess about the reason: because array indices are themselves subject to variable substitution, $Vx[1] is marked "substitute twice" or something like that, and the resulting "*" still has one substitution left. The man page doesn't say anything relevant, so if this is by design, the logic is too subtle for me; whatever it is, it is clearly a side effect of the existing implementation. In my book this is a bug -- at the very least, the fact that the behavior is not documented is one.
The workaround I've found is to quote the command substitution clause. Escaping the inner quotes with a backslash doesn't work reliably and is prone to parsing errors depending on the expression inside; what worked for me in this case was to use single quotes inside:
> echo "`echo '$fnctn' | sed 's/Vx1/$Vx[1]/'`"
echo `echo 'F(x)=Vx1*(1+cos(1*x-pi))' | sed 's/Vx1/1/'`
sed s/Vx1/1/
echo F(x)=Vx1*(1+cos(1*x-pi))
F(x)=1*(1+cos(1*x-pi))
This is just one of the examples of csh's poor/unpolished design that causes people to recommend against using it.

How to print some free text in addition to SED extract

The well-known sed command to extract the first line and append it to another file,
sed -n '1 p' /p/raw.txt | cat >> /p/001.txt ;
gives an output in /p/001.txt like
John Doe
But how do I modify the command above to add some free text and get, for example, output like
Name: John Doe
Thanks for any hint to try.
You can do that in a single command (and no sub-shells):
sed 's/^/Name: /;q' /p/raw.txt >> /p/001.txt
This prefixes "Name: " in front of the first line, prints it, then quits so you don't process additional lines. Add a line number before the q to print all lines up to (and including) that number. The output is appended to /p/001.txt just like your original code.
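For example, to prefix and print the first three lines and then stop:
sed 's/^/Name: /;3q' /p/raw.txt >> /p/001.txt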
If you want a range of lines:
sed -n '3,9{s/^/Name: /;p;};9q' /p/raw.txt >> /p/001.txt
This reads from lines 3-9, performs the substitution, prints, then quits after line 9.
If you want specific lines, I recommend awk:
awk 'NR==3 || NR==9 { print "Name: " $0 } NR>=9 { exit }' /p/raw.txt >> /p/001.txt
This has two clauses. One says that when the record number (line number) is either 3 or 9, we print the prefix and the line. The other stops reading the file once the 9th record has been processed.
Here are two more commands to show how awk can act on just the first line(s) or a given range:
awk '{ print "Name: " $0 } NR >= 1 { exit }' /p/raw.txt >> /p/001.txt
awk '3 <= NR { print "Name: " $0 } NR >= 9 { exit }' /p/raw.txt >> /p/001.txt
It appears you're continuously building one file from the other. Consider:
tail -Fn0 /p/raw.txt | sed 's/^/Name: /' >> /p/001.txt
This will run continuously, adding only new entries (lines added after the command is started) to /p/001.txt.
Perhaps you have lots of duplicates to resolve?
awk 'NR != FNR { $0 = "Name: " $0 } !s[$0]++' \
/p/001.txt /p/raw.txt > /tmp/001.txt && mv /tmp/001.txt /p/001.txt
This folds the previously saved names together with any new names, printing each name only once. (!s[$0]++ is true when s[$0] is zero, its default state; the post-increment then makes it one, so the test is false on every later occurrence. A bare clause with no action prints the line.) Because we're reading the output file as input, we need a temporary output file; upon successful completion we move it over the target output file.
printf "Name: %s\n" "$(sed -n '1p;q' /p/raw.txt)" >/p/001.txt
should do it. If sed is not a requirement, you can do
echo -e "Name: $(sed -n '1p;q' /p/raw.txt)" >/p/001.txt
Note
The q command tells sed to quit without processing any more commands or input.
The -e option tells echo to interpret backslash escape sequences; this is a bash peculiarity and is not portable to every shell.

How to remove YAML frontmatter from markdown files?

I have markdown files that contain YAML frontmatter metadata, like this:
---
title: Something Somethingelse
author: Somebody Sometheson
---
But the YAML is of varying widths. Can I use a POSIX command like sed to remove that frontmatter when it's at the beginning of a file? Something that just removes everything between --- and ---, inclusive, but also ignores the rest of the file, in case there are ---s elsewhere.
I understand your question to mean that you want to remove the first block enclosed in --- lines if it starts at the first line. In that case:
sed '1 { /^---/ { :a N; /\n---/! ba; d} }' filename
This is:
1 {              # in the first line
  /^---/ {       # if it starts with ---
    :a           # jump label for looping
    N            # fetch the next line, append to pattern space
    /\n---/! ba; # if the result does not contain \n--- (that is, if the last
                 # fetched line does not begin with ---), go back to :a
    d            # then delete the whole thing.
  }
}
                 # otherwise drop off the end here and do the default (print
                 # the line)
Depending on how you want to handle lines that begin with ---abc or so, you may have to change the patterns a little (perhaps add $ at the end to only match when the whole line is ---). I'm a bit unclear on your precise requirements there.
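If you do want that stricter behaviour, here is a sketch of the same script with both delimiters anchored, so that only lines consisting of exactly --- open and close the block:
sed '1 {
  /^---$/ {
    :a
    N
    /\n---$/! ba
    d
  }
}' filename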
If you want to remove only the front matter, you could simply run:
sed '1{/^---$/!q;};1,/^---$/d' infile
If the first line doesn't match ---, sed will quit; else it will delete everything from the 1st line up to (and including) the next line matching --- (i.e. the entire front matter).
If you don't mind the "command like sed" being perl instead:
Simply print after two instances of "---" have been found:
perl -ne 'if ($i > 1) { print } else { /^---/ && $i++ }' yaml
or a bit shorter if you don't mind abusing ?: for flow control:
perl -ne '$i > 1 ? print : /^---/ && $i++' yaml
Be sure to include -i if you want to edit the file in place.
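For example (post.md is just a placeholder file name here; -i.bak keeps a backup copy of the original):
perl -i.bak -ne '$i > 1 ? print : /^---/ && $i++' post.md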
You can use a bash script: create script.sh, make it executable using chmod +x script.sh, and run it with ./script.sh.
#!/bin/bash
#folder articles contains a lot of markdown files
files=./articles/*.md
for f in $files;
do
#filename
echo "${f##*/}"
#replace frontmatter title attribute to "title"
sed -i -r 's/^title: (.*)$/title: "\1"/' "$f"
#...
done
This AWK-based solution works for files with and without front matter, doing nothing in the latter case.
#!/bin/sh
# Strips YAML front matter from a file (usually Markdown).
# Exit immediately on any error;
# see: https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_pipefail/
set -e
print_help() {
echo "Strips YAML front matter from a file (usually Markdown)."
echo
echo "Usage:"
echo " $(basename "$0") -h"
echo " $(basename "$0") --help"
echo " $(basename "$0") -i <file-with-front-matter>"
echo " $(basename "$0") --in-place <file-with-front-matter>"
echo " $(basename "$0") <file-with-front-matter> <file-to-be-without-front-matter>"
}
replace=false
in_file="-"
out_file="/dev/stdout"
if [ -n "$1" ]
then
if [ "$1" = "-h" ] || [ "$1" = "--help" ]
then
print_help
exit 0
elif [ "$1" = "-i" ] || [ "$1" = "--in-place" ]
then
replace=true
in_file="$2"
out_file="$in_file"
else
in_file="$1"
if [ -n "$2" ]
then
out_file="$2"
fi
fi
fi
tmp_out_file="$out_file"
if $replace
then
tmp_out_file="${in_file}_tmp"
fi
awk '
BEGIN {
    is_first_line=1;
    in_fm=0;
}
/^---$/ {
    # only a "---" on the very first line opens the front matter
    if (FNR == 1) {
        in_fm=1;
    }
}
{
    if (! in_fm) {
        print $0;
    }
}
/^(---|\.\.\.)$/ {
    if (! is_first_line) {
        in_fm=0;
    }
    is_first_line=0;
}
' "$in_file" >> "$tmp_out_file"
if $replace
then
mv "$tmp_out_file" "$out_file"
fi
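Example invocation (strip_front_matter.sh is just a placeholder for whatever name you saved the script under):
./strip_front_matter.sh post-with-front-matter.md post-clean.md
./strip_front_matter.sh -i post.md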

How to determine if shell command didn't run or produced no output

I am executing some shell commands via a perl script and capturing output, like this,
$commandOutput = `cat /path/to/file | grep "some text"`;
I also check if the command ran successfully or not like this,
if(!$commandOutput)
{
# command not run!
}
else
{
# further processing
}
This usually works and I get the output correctly. The problem is that in some cases the command itself does not produce any output. For instance, sometimes the text I am trying to grep will not be present in the target file, so no output is produced. In this case, my script detects this as "command not run", while it's not true.
What is the correct way to differentiate between these 2 cases in perl?
You can use this to know whether the command failed or just returned nothing:
$val = `cat text.txt | grep -o '[0-9]*'`;
print "command failed" if ($? != 0);
print "empty string" if(! length($val) );
print "val = $val";
Assume that text.txt contains "123ab", from which you want to extract only the number.
Use $? to check if the command executed successfully: see backticks do not return any value in perl for an example.
If you run grep directly on the file instead of piping through cat, you can check $? for a more specific exit status (grep exits with 0 on a match, 1 on no match, and >1 on errors such as a missing file):
my $commandOutput = `grep "some text" /path/to/file`;
if ($? < 0)
{
# command not run!
}
elsif ($? >> 8 > 1)
{
# file not found
}
else
{
# further processing
}

cut off known substring sh

How to cut off known substring from the string in sh?
For example, I have the string "http://www.myserver.org/very/very/long/path/mystring"
and the expression "http://www.myserver.org/very/very/long/path/" is known. How can I get "mystring"?
Thanks.
E.g. using perl:
echo "http://www.myserver.org/very/very/long/path/mystring" | perl -pe 's|^http://www.myserver.org/very/very/long/path/(.*)$|$1|'
E.g. using sed:
echo "http://www.myserver.org/very/very/long/path/mystring" | sed 's|^http://www.myserver.org/very/very/long/path/\(.*\)$|\1|'
E.g. when the search string is held in a shell variable, here named variable, use double quotes so that the variable is expanded:
echo "http://www.myserver.org/very/very/long/path/mystring" | sed "s|^${variable}\(.*\)$|\1|"
Tested under /bin/dash
$ S="http://www.myserver.org/very/very/long/path/mystring" && echo ${S##*/}
mystring
where
S is the variable name
## removes the largest matching prefix pattern
*/ matches everything up to the last slash
For further reading, search "##" in man dash
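For contrast, a single # removes the smallest matching prefix instead of the largest:
$ S="a/b/c/mystring" && echo ${S#*/} && echo ${S##*/}
b/c/mystring
mystring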
Some more illustrations:
$ S="/mystring/" ; echo ${S##*/}
$ S="/mystring" ; echo ${S##*/}
mystring
$ S="mystring" ; echo ${S##*/}
mystring
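The ${S##*/} trick works because the known part ends in a slash. To strip an arbitrary known prefix, you can also keep it in a variable (a minimal sketch; prefix is just a name chosen here for illustration):
prefix="http://www.myserver.org/very/very/long/path/"
S="http://www.myserver.org/very/very/long/path/mystring"
echo "${S#$prefix}"
This also prints mystring: ${S#pattern} removes the shortest matching prefix, and since $prefix contains no glob characters it matches literally.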