How to capture a single quote when using Perl on the CLI? - perl

Suppose I have a text file with content like below:
'Jack', is a boy
'Jenny', is a girl
...
...
...
I'd like to use perl on the CLI to capture only the names between pairs of single quotes:
cat text| perl -ne 'print $1."\n" if/\'(\w+?)\'/'
The above command is what I ran, but it didn't work. It seems like the ' confused the shell.
I know we have other options, like writing a perl script. But given my circumstances, I'd like to find a way to do this on the shell command line.
Please advise.

The shell has the interesting property of concatenating adjacent quoted strings. Or rather, '...' and "..." should not be thought of as strings, but as modifiers that change which escapes are available. Inside '...' no escapes are available at all; outside of '...', a single quote can be passed as \'. Together with the concatenation property, we can embed a single quote like
$ perl -E'say "'\''";'
'
into the -e code. The first ' exits the no-escape zone, \' is our single quote, and ' re-enters the escapeless zone. What perl saw was
perl // argv[0]
-Esay "'"; // argv[1]
This would make your command
cat text| perl -ne 'print $1."\n" if/'\''(\w+?)'\''/'
(quotes don't need escaping in regexes), or
cat text| perl -ne "print \$1.qq(\n) if/'(\w+?)'/"
(using double quotes to surround the command, but using qq// for double quoted strings and escaping the $ sigil to avoid shell variable interpolation).
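A third way to sidestep the quoting problem entirely (a sketch, not part of the answer above) is to spell the single quote with its hex escape \x27 inside the Perl regex, so the shell never sees a quote that needs escaping:
cat text| perl -ne 'print $1."\n" if/\x27(\w+?)\x27/'
(The octal form \047 works the same way.)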

Here are some methods that do not require manually escaping the perl statement:
(Disclaimer: I'm not sure how robust these are – they haven't been tested extensively)
Cat-in-the-bag technique
perl -ne "$(cat)" text
You will be prompted for input. To terminate cat, press Ctrl-D.
One shortcoming of this: The perl statement is not reusable. This is addressed by the variation:
pline=$(cat)
perl -ne "$pline" text
The bash builtin, read
Multiple lines:
read -rd'^[' pline
Single line:
read -r pline
Reads user input into the variable pline.
The meaning of the switches:
-r: stop read from interpreting backslashes (e.g. by default read interprets \w as w)
-d: determines what character ends the read command.
^[ is the character corresponding to Esc, you insert ^[ by pressing Ctrl-V then Esc.
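For example, a quick sketch using the question's file: after read returns, the captured one-liner can be reused without retyping it:
read -r pline        # then type:  print $1."\n" if /'(\w+?)'/  and press Enter
perl -ne "$pline" text
(The value of $pline is not re-expanded by the shell, so the $1 inside it reaches perl untouched.)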
Heredoc and script.
(You said no scripts, but this is quick and dirty, so might as well...)
cat << 'EOF' > scriptonite
print $1 . "\n" if /'(\w+)'/
EOF
then you simply
perl -n scriptonite text
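With the sample file from the question, that should print something like:
$ perl -n scriptonite text
Jack
Jenny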

Related

sed not working as expected when trying to replace "user='mysql'" with "user=`whoami`"

The following command fails.
sed 's/user=\'mysql\'/user=`whoami`/g' input_file
An example input_file contains the following line
user='mysql'
The corresponding expected output is
user=`whoami`
(Yes, I literally want whoami between backticks, I don't want it to expand my userid.)
This should be what you need: use double quotes to enclose the sed command, so that you are free to use single quotes in it, and escape the backticks to avoid the expansion.
sed "s/user='mysql'/user=\`whoami\`/g" yourfile
I've intentionally omitted the -i option for the simple reason that it is not part of the issue.
To clarify the relation between single quotes and escaping, compare the following two commands
echo 'I didn\'t know'
echo 'I didn'\''t know'
The former will wait for further input as there's an open ', whereas the latter will work fine, as you are concatenating a single quoted string ('I didn'), an escaped single quote (\'), and another single quoted string ('t know').
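If you want to see exactly what the shell hands to a command after all the quote juggling, printf '%s\n' is a handy probe (just a sketch):
printf '%s\n' 'I didn'\''t know'    # prints: I didn't know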

Issue with sed command in Bash for string replacement

I've been looking for a way to find-and-replace strings in all fortran files in my current directory. Most answers on here are along the lines of using:
sed -i 's/INCLUDE \'atm/params.inc\'/USE params/g' *.f
or
perl -pi -w -e 's/INCLUDE \'atm/params.inc\'/USE params/g' *.f
However, when I use either of these, the bash line continuation prompt > pops up on the next line as if it's expecting input or another argument. I haven't seen anyone else encounter this and I'm not sure what to do about it. Are my commands incorrect, or am I missing something?
There are two problems with the original:
You weren't protecting the / in your literal data, so sed parsed it as the end of the pattern rather than as data. One very readable and explicit way to protect it is with [/].
You were trying to use \' to put a literal ' in a single-quoted string. That doesn't work. The common idiom is '"'"', which, character-by-character, does the following:
' - exits the original single-quoted context
" - opens a double-quoted context
' - adds a literal single-quote (protected by the surrounding double quotes)
" - ends that double-quoted context
' - resumes the outer single-quoted context.
Thus, consider:
# note: this works with GNU sed, not MacOS sed or others
sed -i 's/INCLUDE '"'"'atm[/]params.inc'"'"'/USE params/g' *.f
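If escaping the slash as [/] feels noisy, another option (a sketch, not part of the answer above; still GNU sed because of -i) is to pick an s/// delimiter that doesn't occur in the data, say #, so the path's slashes need no protection at all, while the single quotes still use the '"'"' idiom:
sed -i 's#INCLUDE '"'"'atm/params.inc'"'"'#USE params#g' *.f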

Why is '\t', returned from a backtick call, expanded to a tab? Which characters are expanded?

If I run a simple perl script like this (on linux, with bash),
$to_run = q(echo '\t');
$res = `$to_run`;
print $res
I would expect that \t will be printed - that is, the backslash character and "t" character. Indeed, if I run just in bash
echo '\t'
I see \t. However, the perl script prints a tab.
Why is the tab expanded in $res? Which characters are expanded like that? And, most importantly, how do I stop it from expanding?
Backticks are evaluated using /bin/sh, regardless of whatever shell you may want to use, and it's the POSIX XSI-conformant version of echo implemented by sh that's converting \t to a tab. Try it out yourself by running echo '\t' inside sh.
To avoid this behavior, try using printf '%s\n' instead of echo in the backticks.
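To see the difference on your own machine (a small sketch; the exact behaviour of echo depends on your /bin/sh):
sh -c "echo '\t'"            # many sh implementations print a real tab here
sh -c "printf '%s\n' '\t'"   # prints the two characters \ and t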

sed rare-delimiter (other than & | / ?...)

I am using the Unix sed command on a string that can contain all types of characters (&, |, !, /, ?, etc).
Is there a complex delimiter (with two characters?) that can fix the error:
sed: -e expression #1, char 22: unknown option to `s'
The characters in the input file are of no concern - sed parses them fine. There may be an issue, however, if you have most of the common characters in your pattern - or if your pattern may not be known beforehand.
At least on GNU sed, you can use a non-printable character that is highly improbable to exist in your pattern as a delimiter. For example, if your shell is Bash:
$ echo '|||' | sed s$'\001''|'$'\001''/'$'\001''g'
In this example, Bash replaces $'\001' with the character that has the octal value 001 - in ASCII it's the SOH character (start of heading).
Since such characters are control/non-printable characters, it's doubtful that they will exist in the pattern. Unless, that is, you are doing something weird like modifying binary files - or Unicode files without the proper locale settings.
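The same idea is a little easier to read if the delimiter is kept in a variable (a sketch; still Bash-specific because of $'\001'):
d=$'\001'
echo '|||' | sed "s${d}|${d}/${d}g"    # prints: ///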
Another way to do this is to use Shell Parameter Substitution.
${parameter/pattern/replace} # substitute replace for pattern once
or
${parameter//pattern/replace} # substitute replace for pattern everywhere
Here is a quite complex example that is difficult with sed:
$ parameter="Common sed delimiters: [sed-del]"
$ pattern="\[sed-del\]"
$ replace="[/_%:\\#]"
$ echo "${parameter//$pattern/replace}"
result is:
Common sed delimiters: [/_%:\#]
However: this only works with bash parameters, not with files, where sed excels.
There is no such option for multi-character expression delimiters in sed, but I doubt
you need that. The delimiter character should not occur in the pattern, but if it appears in the string being processed, it's not a problem. And unless you're doing something extremely weird, there will always be some character that doesn't appear in your search pattern that can serve as a delimiter.
You need the nested delimiter facility that Perl offers. That allows you to match, substitute, and transliterate without worrying about the delimiter being included in your contents. Since perl is a superset of sed, you should be able to use it for whatever you were using sed for.
Consider this:
$ perl -nle 'print if /something/' inputs
Now if your something contains a slash, you have a problem. The way to fix this is to change the delimiter, preferably to a bracketing one. So for example, you could have anything you like in the $WHATEVER shell variable (provided the brackets are balanced), which gets interpolated by the shell before Perl is even called here:
$ perl -nle "print if m($WHATEVER)" /usr/share/dict/words
That works even if you have correctly nested parens in $WHATEVER. The four bracketing pairs which correctly nest like this in Perl are < >, ( ), [ ], and { }. They allow arbitrary contents that include the delimiter if that delimiter is balanced.
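For instance, a bracketing delimiter lets the pattern itself contain slashes without any escaping (some_paths.txt is just a hypothetical file name):
perl -nle 'print if m{/usr/local/}' some_paths.txt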
If it is not balanced, then do not use a delimiter at all. If the pattern is in a Perl variable, you don't need the match operator's delimiters; just bind the variable directly with the =~ operator, so:
$whatever = "some arbitrary string ( / # [ etc";
if ($line =~ $whatever) { ... }
With the help of Jim Lewis, I finally did a test before using sed:
if [ `echo $1 | grep '|'` ]; then
    grep ".*$1.*:" $DB_FILE | sed "s#^.*$1*.*\(:\)## "
else
    grep ".*$1.*:" $DB_FILE | sed "s|^.*$1*.*\(:\)|| "
fi
Thanks for the help.
Wow. I totally did not know that you could use any character as a delimiter.
At least half the time I use sed and BREs it's on paths, code snippets, junk characters, things like that. I end up with a bunch of horribly unreadable escapes which I'm not even sure won't die on some combination I didn't think of. But if you can exclude just some character class (or even just one character):
echo '#01Y $#1+!' | sed -e 'sa$#1+ashita' -e 'su#01YuHolyug'
Holy shit!
That's so much easier.
Escaping the delimiter inline for BASH to parse is cumbersome and difficult to read (although the delimiter does need escaping for sed's benefit when it's first used, per-expression).
To pull together thkala's answer and user4401178's comment:
DELIM=$(echo -en "\001");
sed -n "\\${DELIM}${STARTING_SEARCH_TERM}${DELIM},\\${DELIM}${ENDING_SEARCH_TERM}${DELIM}p" "${FILE}"
This example prints everything from ${STARTING_SEARCH_TERM} through ${ENDING_SEARCH_TERM}, using the SOH (start of heading) character, ASCII code 001, as the delimiter, so it works as long as the search terms don't themselves contain that character.
There's no universal separator, but a separator can be escaped with a backslash so that sed does not treat it as a separator (at least unless you choose the backslash itself as the separator).
Depending on the actual application, it might be handy to just escape those characters in both pattern and replacement.
If you're in a bash environment, you can use bash substitution to escape sed separator, like this:
safe_replace () {
sed "s/${1//\//\\\/}/${2//\//\\\/}/g"
}
It's pretty self-explanatory, except for the bizarre-looking part.
An explanation of that part:
${1//\//\\\/}
${ - bash expansion starts
1 - first positional argument - the pattern
// - bash pattern substitution pattern separator "replace-all" variant
\/ - literal slash
/ - bash pattern substitution replacement separator
\\ - literal backslash
\/ - literal slash
} - bash expansion ends
example use:
$ input="ka/pus/ta"
$ pattern="/pus/"
$ replacement="/re/"
$ safe_replace "$pattern" "$replacement" <<< "$input"
ka/re/ta

How can I have a newline in a string in sh?

This
STR="Hello\nWorld"
echo $STR
produces as output
Hello\nWorld
instead of
Hello
World
What should I do to have a newline in a string?
Note: This question is not about echo.
I'm aware of echo -e, but I'm looking for a solution that allows passing a string (which includes a newline) as an argument to other commands that do not have a similar option to interpret \n's as newlines.
If you're using Bash, you can use backslash-escapes inside of a specially-quoted $'string'. For example, adding \n:
STR=$'Hello\nWorld'
echo "$STR" # quotes are required here!
Prints:
Hello
World
If you're using pretty much any other shell, just insert the newline as-is in the string:
STR='Hello
World'
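Either way, keep the double quotes when you expand the variable; without them the shell word-splits on the newline and echo joins the pieces back with spaces (a quick sketch):
echo $STR     # prints: Hello World   (newline lost)
echo "$STR"   # prints Hello and World on separate lines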
Bash recognizes a number of other backslash escape sequences in the $'' string. Here is an excerpt from the Bash manual page:
Words of the form $'string' are treated specially. The word expands to
string, with backslash-escaped characters replaced as specified by the
ANSI C standard. Backslash escape sequences, if present, are decoded
as follows:
\a alert (bell)
\b backspace
\e, \E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\nnn the eight-bit character whose value is the octal value
nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal
value HH (one or two hex digits)
\cx a control-x character
The expanded result is single-quoted, as if the dollar sign had not
been present.
A double-quoted string preceded by a dollar sign ($"string") will cause
the string to be translated according to the current locale. If the
current locale is C or POSIX, the dollar sign is ignored. If the
string is translated and replaced, the replacement is double-quoted.
Echo is so nineties and so fraught with perils that its use should result in core dumps no less than 4GB. Seriously, echo's problems were the reason why the Unix Standardization process finally invented the printf utility, doing away with all the problems.
So to get a newline in a string, there are two ways:
# 1) Literal newline in an assignment.
FOO="hello
world"
# 2) Command substitution.
BAR=$(printf "hello\nworld\n") # Alternative; note: final newline is deleted
printf '<%s>\n' "$FOO"
printf '<%s>\n' "$BAR"
There! No SysV vs. BSD echo madness, everything gets neatly printed, and you get fully portable support for C escape sequences. Everybody please use printf now for all your output needs and never look back.
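And since the question is really about handing such a string to other commands, note that the embedded newline is simply part of the value; any command receives it intact (a sketch, with wc standing in for an arbitrary command):
printf '%s\n' "$FOO" | wc -l    # prints 2: the value really does contain a newline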
What I did based on the other answers was
NEWLINE=$'\n'
my_var="__between eggs and bacon__"
echo "spam${NEWLINE}eggs${my_var}bacon${NEWLINE}knight"
# which outputs:
spam
eggs__between eggs and bacon__bacon
knight
I find the -e flag elegant and straightforward:
bash$ STR="Hello\nWorld"
bash$ echo -e $STR
Hello
World
If the string is the output of another command, I just use quotes
indexes_diff=$(git diff index.yaml)
echo "$indexes_diff"
The problem isn't with the shell. The problem is actually with the echo command itself, and the lack of double quotes around the variable interpolation. You can try using echo -e, but that isn't supported on all platforms, which is one of the reasons printf is now recommended for portability.
You can also try and insert the newline directly into your shell script (if a script is what you're writing) so it looks like...
#!/bin/sh
echo "Hello
World"
#EOF
or equivalently
#!/bin/sh
string="Hello
World"
echo "$string" # note double quotes!
The only simple alternative is to actually type a new line in the variable:
$ STR='new
line'
$ printf '%s' "$STR"
new
line
Yes, that means pressing Enter where needed in the code.
There are several equivalents to a new line character.
\n ### A common way to represent a new line character.
\012 ### Octal value of a new line character.
\x0A ### Hexadecimal value of a new line character.
But all those require "an interpretation" by some tool (POSIX printf):
echo -e "new\nline" ### on POSIX echo, `-e` is not required.
printf 'new\nline' ### Understood by POSIX printf.
printf 'new\012line' ### Valid in POSIX printf.
printf 'new\x0Aline'
printf '%b' 'new\0012line' ### Valid in POSIX printf.
And therefore, the tool is required to build a string with a new-line:
$ STR="$(printf 'new\nline')"
$ printf '%s' "$STR"
new
line
In some shells, the sequence $'...' is a special shell expansion.
Known to work in ksh93, bash and zsh:
$ STR=$'new\nline'
Of course, more complex solutions are also possible:
$ echo '6e65770a6c696e650a' | xxd -p -r
new
line
Or (with GNU sed, which understands \n in the replacement):
$ echo "new line" | sed 's/ \+/\n/g'
new
line
A $ right before single quotation marks ('...\n...') works, as follows; with double quotation marks, however, it doesn't:
$ echo $'Hello\nWorld'
Hello
World
$ echo $"Hello\nWorld"
Hello\nWorld
Disclaimer: I first wrote this and then stumbled upon this question. I thought this solution wasn't yet posted, and saw that tlwhitec did post a similar answer. Still I'm posting this because I hope it's a useful and thorough explanation.
Short answer:
This seems to be quite a portable solution, as it works in quite a few shells (bash, dash, ksh, zsh; see the comment in the code below).
This way you can get a real newline into a variable.
The benefit of this solution is that you don't have to use newlines in your source code, so you can indent
your code any way you want, and the solution still works. This makes it robust. It's also portable.
# Robust way to put a real newline in a variable (bash, dash, ksh, zsh; indentation-resistant).
nl="$(printf '\nq')"
nl=${nl%q}
Longer answer:
Explanation of the above solution:
The newline would normally be lost due to command substitution, but to prevent that, we add a 'q' and remove it afterwards. (The reason for the double quotes is explained further below.)
We can prove that the variable contains an actual newline character (0x0A):
printf '%s' "$nl" | hexdump -C
00000000 0a |.|
00000001
(Note that the '%s' was needed, otherwise printf will translate a literal '\n' string into an actual 0x0A character, meaning we would prove nothing.)
Of course, instead of the solution proposed in this answer, one could use this as well (but...):
nl='
'
... but that's less robust and can be easily damaged by accidentally indenting the code, or by forgetting to outdent it afterwards, which makes it inconvenient to use in (indented) functions, whereas the earlier solution is robust.
Now, as for the double quotes:
The reason for the double quotes " surrounding the command substitution as in nl="$(printf '\nq')" is that you can then even prefix the variable assignment with the local keyword or builtin (such as in functions), and it will still work on all shells, whereas otherwise the dash shell would have trouble, in the sense that dash would otherwise lose the 'q' and you'd end up with an empty 'nl' variable (again, due to command substitution).
That issue is better illustrated with another example:
dash_trouble_example() {
e=$(echo hello world) # Not using 'local'.
echo "$e" # Fine. Outputs 'hello world' in all shells.
local e=$(echo hello world) # But now, when using 'local' without double quotes ...:
echo "$e" # ... oops, outputs just 'hello' in dash,
# ... but 'hello world' in bash and zsh.
local f="$(echo hello world)" # Finally, using 'local' and surrounding with double quotes.
echo "$f" # Solved. Outputs 'hello world' in dash, zsh, and bash.
# So back to our newline example, if we want to use 'local', we need
# double quotes to surround the command substitution:
# (If we didn't use double quotes here, then in dash the 'nl' variable
# would be empty.)
local nl="$(printf '\nq')"
nl=${nl%q}
}
Practical example of the above solution:
# Parsing lines in a for loop by setting IFS to a real newline character:
nl="$(printf '\nq')"
nl=${nl%q}
IFS=$nl
for i in $(printf '%b' 'this is line 1\nthis is line 2'); do
echo "i=$i"
done
# Desired output:
# i=this is line 1
# i=this is line 2
# Exercise:
# Try running this example without the IFS=$nl assignment, and predict the outcome.
I'm no bash expert, but this one worked for me:
STR1="Hello"
STR2="World"
NEWSTR=$(cat << EOF
$STR1
$STR2
EOF
)
echo "$NEWSTR"
I found this easier for formatting the text.
Those picky ones who need just the newline and despise the multiline code that breaks indentation could do:
IFS="$(printf '\nx')"
IFS="${IFS%x}"
Bash (and likely other shells) gobbles all the trailing newlines after command substitution, so you need to end the printf string with a non-newline character and delete it afterwards. This can also easily become a one-liner.
IFS="$(printf '\nx')" IFS="${IFS%x}"
I know this is two actions instead of one, but my indentation and portability OCD is at peace now :) I originally developed this to be able to split output separated only by newlines, and I ended up using a modification that uses \r as the terminating character. That makes the newline splitting work even for DOS output ending with \r\n.
IFS="$(printf '\n\r')"
On my system (Ubuntu 17.10) your example just works as desired, both when typed from the command line (into sh) and when executed as a sh script:
[bash]§ sh
$ STR="Hello\nWorld"
$ echo $STR
Hello
World
$ exit
[bash]§ echo "STR=\"Hello\nWorld\"
> echo \$STR" > test-str.sh
[bash]§ cat test-str.sh
STR="Hello\nWorld"
echo $STR
[bash]§ sh test-str.sh
Hello
World
I guess this answers your question: it just works. (I have not tried to figure out details such as at what moment exactly the substitution of the newline character for \n happens in sh).
However, I noticed that this same script would behave differently when executed with bash and would print out Hello\nWorld instead:
[bash]§ bash test-str.sh
Hello\nWorld
I've managed to get the desired output with bash as follows:
[bash]§ STR="Hello
> World"
[bash]§ echo "$STR"
Note the double quotes around $STR. This behaves identically if saved and run as a bash script.
The following also gives the desired output:
[bash]§ echo "Hello
> World"
I wasn't really happy with any of the options here. This is what worked for me.
str=$(printf "%s" "first line")
str=$(printf "$str\n%s" "another line")
str=$(printf "$str\n%s" "and another line")
This isn't ideal, but I had written a lot of code and defined strings in a way similar to the method used in the question. The accepted solution would have required me to refactor a lot of the code, so instead I replaced every \n with "$'\n'", and this worked for me.