Is it possible to use an environment variable with wildcards in a query?
Given:
LevelA:
  levelB:
    - sometexthere
    - other.value.here
The following query:
yq eval '.LevelA.levelB.[] | select(. == "*text*")' $file
returns: sometexthere.
An environment variable can also be used:
A="sometexthere" yq eval '.LevelA.levelB.[] | select(. == env(A) )' $file
to return the same value: sometexthere. However, this is a little pointless since the output is identical to the input variable value.
If wildcards are combined with the environment variable (to match a partial string), the command returns nothing:
A=text yq eval '.LevelA.levelB.[] | select(. == "*env(A)*")' $file
Is there another way to search with yq for a partial string using an environment variable?
You do not need the env() operator to achieve this. Instead, close and reopen the single quotes to concatenate shell variables into your query.
A="text"
yq eval '.LevelA.levelB.[] | select(. == "*'$A'*" )' file.yml
output:
sometexthere
With this technique, you can leverage bash parameter-expansions.
I have added the entry text with space to levelB to demonstrate.
# file.yml
LevelA:
  levelB:
    - sometexthere
    - other.value.here
    - text with space
Given the variable A="without space", use substitution ${A/out/} to remove the first occurrence of the string "out". The select operator will now search for the wildcard
string "*with space*".
A="without space"
yq eval '.LevelA.levelB.[] | select(. == "*'"${A/out/}"'*" )' file.yml
#       |                                | ||    |    || |  |
#       |                                | ||    |    || |  └> (a.1) end yq query
#       |                                | ||    |    || └> (b) end string
#       |                                | ||    |    |└> (a.2) open yq query (end concat)
#       |                                | ||    |    └> (c) bash double quote
#       |                                | ||    └> remove the first occurrence of "out"
#       |                                | |└> (c) bash double quote
#       |                                | └> (a.2) close yq query (begin concat)
#       |                                └> (b) begin string
#       └> (a.1) start yq query
output:
text with space
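If you prefer to keep the env() operator, it can still be combined with a wildcard by building the pattern inside the query. This is an untested sketch that assumes a yq v4 release where strenv() and string concatenation with + are available:
A="text" yq eval '.LevelA.levelB.[] | select(. == "*" + strenv(A) + "*")' file.yml
This should match every entry containing "text", just like the single-quote concatenation above.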
This regular expression will match exactly one / and one . in a line. But why does it match? Can anyone explain each character's role in this regular expression clearly?
if ($fp =~ m{^[^/]*/[^/]*$} and $fp =~ m{^[^.]*.[^.]$})
{
    print $fp;
}
if($fp =~ m{^[^/]*/[^/]*$} and $fp =~ m{^[^.]*.[^.]$}) {
            ^\  /^^\  /^^
            | |  || |  ||
------------- |  || |  ||
begin line    |  || |  ||
---------------  || |  ||
any char but /   || |  ||
------------------| |  ||
zero or more      | |  ||
------------------- |  ||
one /               |  ||
---------------------  ||
any char but /         ||
------------------------|
zero or more            |
-------------------------
end of line
So it searches for:
begin of line (^),
followed by zero or more occurrences (*) of any char but / ([^/]),
followed by a /,
followed by zero or more occurrences (*) of any char but / ([^/]),
followed by end of line ($).
The "." search is similar, and the 'if' triggers only if both are true.
Note that [...] matches one char from a set. For instance [abc] matches either an 'a', a 'b', or a 'c'. If the first char is '^', the test is reversed, and [^/] is any char but '/'.
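To see the first expression in action, here are two hypothetical test strings; the one with a single slash matches, the one with two slashes prints nothing:
$ echo "path/to" | perl -ne 'print if m{^[^/]*/[^/]*$}'
path/to
$ echo "path/to/file" | perl -ne 'print if m{^[^/]*/[^/]*$}'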
While the previous answers are correct in explaining the regex, they fail to point out that the 2nd regex is actually broken. As written, it will match:
start of line
followed by zero-or-more non-. (dot) characters
followed by ANY character, except \n
followed by ONE non-. (dot) character
end of line
Proof:
$ echo "This should NOT match" | perl -ne 'print if m{^[^.]*.[^.]$}'
This should NOT match <--- INCORRECT MATCH
$ echo "This should. match" | perl -ne 'print if m{^[^.]*.[^.]$}'
<--- INCORRECT MIS-MATCH
$ echo "This should match.!" | perl -ne 'print if m{^[^.]*.[^.]$}'
This should match.! <-- CORRECT (by luck)
$ echo "This should match." | perl -ne 'print if m{^[^.]*.[^.]$}'
This should match. <-- CORRECT
Correct would be
the . needs to be escaped (\.)
the 2nd character class needs a *
$ echo "This should NOT match" | perl -ne 'print if m{^[^.]*\.[^.]*$}'
<-- CORRECT
$ echo "This should. match" | perl -ne 'print if m{^[^.]*\.[^.]*$}'
This should. match <-- CORRECT
$ echo "This should match.!" | perl -ne 'print if m{^[^.]*\.[^.]*$}'
This should match.! <-- CORRECT
$ echo "This should match." | perl -ne 'print if m{^[^.]*\.[^.]*$}'
This should match. <-- CORRECT
The first expression: m starts the match operator, { opens the expression, ^ matches the beginning of the line, [^/]* matches any character but '/' zero or more times, / matches a literal '/', then [^/]* again, $ matches the end of the line, and } closes the expression.
I have a file containing:
L1
L2
L3
.
.
.
L512
I want to change its content to:
L1 | L2 | L3 | ... | L512
It seems so easy, but I have now been sitting and trying for an hour. I tried to do it with sed, but didn't get what I want; it seems that sed just inserts empty lines between the content. Any suggestions, please?
With sed, this requires reading the whole input into a buffer and afterwards replacing all newlines with |, like this:
sed ':a;N;$!ba;s/\n/ | /g' input.txt
Part 1 - buffering input
:a defines a label called 'a'
N gets the next line from input and appends it to the pattern buffer
$!ba jumps to a unless the end of input is reached
Part 2 - replacing newlines by |
s/\n/ | /g executes the substitute command on the pattern buffer, replacing every newline with " | "
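For example, on a small hypothetical sample (GNU sed):
$ printf 'L1\nL2\nL3\n' | sed ':a;N;$!ba;s/\n/ | /g'
L1 | L2 | L3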
As you can see, this is very inefficient since it requires sed to:
read the complete input into memory
operate three times on the input: 1. reading, 2. substituting, 3. printing
Therefore I would suggest using awk, which can do it in a single pass:
awk 'NR==1{printf $0;next}{printf " | "$0}END{print ""}' input.txt
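On the same hypothetical three-line sample, the awk version gives the same result without buffering the whole file:
$ printf 'L1\nL2\nL3\n' | awk 'NR==1{printf $0;next}{printf " | "$0}END{print ""}'
L1 | L2 | L3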
Here is one sed
sed ':a;N;s/\n/ | /g;ta' file
L1 | L2 | L3 | ... | L512
And one awk
awk '{printf("%s%s",sep,$0);sep=" | "} END {print ""}' file
L1 | L2 | L3 | ... | L512
perl -pe 's/\n/ |/g unless(eof)' file
If the space around | is not mandatory:
tr '\n' '|' < YourFile
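Note that tr also converts the final newline, so the result ends with a trailing | and no newline. If that matters, one possible cleanup (a sketch) is:
tr '\n' '|' < YourFile | sed 's/|$//'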
Several options, including those mentioned here:
paste -sd'|' file
sed ':a;N;s/\n/ | /g;ta' file
sed ':a;N;$!ba;s/\n/ | /g' file
perl -0pe 's/\n/ | /g;s/ \| $/\n/' file
perl -0nE 'say join " | ", split /\n/' file
perl -E 'chomp(@x=<>); say join " | ", @x' file
mapfile -t ary < file; (IFS="|"; echo "${ary[*]}")
awk '{printf("%s%s",sep,$0);sep=" | "} END {print ""}' file
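Note that paste only accepts a single-character delimiter, so the first variant joins with a bare | rather than the spaced " | " separator. If the spaces matter, one possible post-processing step (a sketch, assuming no other | characters occur in the data) is:
paste -sd'|' file | sed 's/|/ | /g'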
I'm trying to fetch a version number from an XML file on a remote machine. I do this via the Net::SSH::Perl cmd function. It looks something like this:
my ($version, $err, $exit) = $ssh->cmd("head -11 /some/path/to/the/file.xml | tail -1 | sed 's/<[^>]\+>//g' | xargs");
print Dumper $version;
What I'm trying to achieve with that is to extract the number out of the XML tag <version>2.6</version>.
It works perfectly fine when I run the command in an SSH shell via PuTTY:
user@remotemachine:~> head -11 /some/path/to/the/file.xml | tail -1 | sed 's/<[^>]\+>//g' | xargs
2.6
user@remotemachine:~>
However, Perl prints
$VAR1 = '<version>2.6</version>
';
Any ideas why it's not working?
Edit: Obviously it has nothing to do with the Net::SSH::Perl module, since
perl -e "system(\"head -11 /some/path/to/the/file.xml | tail -1 | sed 's/<[^>]\+>//g' | xargs\");"
also prints
<version>2.6</version>
You are using double quotes. In double quotes, \ is special, so only + instead of \+ is passed to sed.
You can use the q() operator to avoid backslashing the backslash:
$ssh->cmd(q(head -11 /some/path/to/the/file.xml | tail -1 | sed 's/<[^>]\+>//g' | xargs));
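To see the difference between the two quoting styles, here are two hypothetical one-liners that only illustrate what each string literal produces:
$ perl -e 'print "a\+b", "\n"'
a+b
$ perl -e 'print q(a\+b), "\n"'
a\+b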
Input of the text file
A | 1 | def | 1432
A | 1 | ffr | 1234
A | 1 | dfs | 3241
A | 2 | asf | 2213
Desired Output
A | 1 | def 1432,ffr 1234,dfs 3241
A | 2 | asf 2213
I want to merge the values related to the 2nd column into a single row.
And here is a Perl attempt:
perl -F'\s+\|\s+' -alne '
    $a{$F[1]} .= "$F[2] $F[3],";
    END {
        $_ = "A | $_ | $a{$_}", s/,$//, print for sort keys %a;
    }' FILE
Your problem is not well specified, but here's a step towards a solution:
awk -F\| '{ a[$1 "|" $2] = a[$1 "|" $2 ] "," $3 $4 }
END { for( x in a ) print x a[x]}' input |
sed 's/,/|/' # Trim leading comma
This will incorrectly delete the first comma in either of the first 2 columns instead of the incorrectly inserted leading comma in the 3rd column of output, and sorts on both of the first 2 columns rather than just the 2nd. Also, the order of the output will be different than the input. There are probably other issues, but this may help.
awk '
    BEGIN { FS = " \\| "; OFS = SUBSEP = " | " }
    {
        val[$1,$2] = val[$1,$2] sep[$1,$2] $3 " " $4
        sep[$1,$2] = ","
    }
    END { for (key in val) print key, val[key] }
'
This will likely not preserve the order of the input. Also, it uses both the 1st and 2nd columns as the key, but since, as you say, the 1st column does not change, that is irrelevant.
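If the input order does matter, one possible extension of the same idea (an untested sketch) records each key the first time it is seen and prints in that order:
awk '
    BEGIN { FS = " \\| "; OFS = SUBSEP = " | " }
    {
        if (!(($1,$2) in val)) order[++n] = $1 SUBSEP $2
        val[$1,$2] = val[$1,$2] sep[$1,$2] $3 " " $4
        sep[$1,$2] = ","
    }
    END { for (i = 1; i <= n; i++) print order[i], val[order[i]] }
' file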
I have some files in a directory as below (not necessarily sorted):
A_10
A_20
A_30
B_10
B_30
C_10
C_20
D_20
D_30
E_10
E_20
E_30
10, 20 and 30 are the sequence numbers belonging to A, B, C, D and E.
I want to select, for each of A, B, C, D and E, only the file with the minimum sequence number.
The output should be:
A_10
B_10
C_10
D_20
E_10
Could anybody help me?
perl -le '
    print join $/,
        grep !$_{( split "_" )[0]}++,
        sort glob "*_*"
'
or:
printf '%s\n' *_* | sort | awk -F_ '!_[$1]++'
or:
printf '%s\n' *_* | sort -t_ -uk1,1
In bash:
for x in A B C D E; do
ls -1 ${x}_* | sort | head -n1
done
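If the prefixes are not known in advance, they can be derived from the file names themselves. A sketch, assuming the names contain no whitespace and the part after the underscore is numeric:
for p in $(printf '%s\n' *_* | cut -d_ -f1 | sort -u); do
    printf '%s\n' "${p}"_* | sort -t_ -k2,2n | head -n1
done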