Why does `echo HTTPS_PROXY=$HTTPS_PROXY` print an empty line when the variable is not set? - fish

Shouldn't it print HTTPS_PROXY= instead (when $HTTPS_PROXY is not set)?
I know I can work around it using
echo HTTPS_PROXY=(echo $HTTPS_PROXY) or echo HTTPS_PROXY="$HTTPS_PROXY", but I want to know why I need a workaround in this case.

In fish, all variables are lists. When you concatenate a string and a variable, fish combines every list element with the string.
So
set bar 1 2 3
echo foo$bar
prints "foo1 foo2 foo3".
Now, when you have an undefined variable (or an empty one, set like set bar without values), this combines nothing with the string, which ends up eliminating the entire argument.
You can think of any variable expansion as a brace expansion: echo foo{1,2,3} is the same as echo foo$bar with bar set as above.
In many cases, that is exactly what you want. Imagine $bar being a list of directories. To go over all files in them you could use
for file in $bar/*
and if $bar was empty (there was no directory), the entire loop would be skipped instead of e.g. showing all files in "/".
The obvious solution is to quote the variable if you want to suppress this. Quoting turns the variable into exactly one argument, even if it's empty or has multiple elements, so
echo foo"$bar"
prints "foo1 2 3" (as one argument).
This is documented at https://fishshell.com/docs/current/#combining-lists-cartesian-product.
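A quick way to see the difference in practice (a minimal sketch, assuming HTTPS_PROXY is currently unset):
set -e HTTPS_PROXY                # erase the variable (no-op if it is already unset)
echo HTTPS_PROXY=$HTTPS_PROXY     # the whole argument is eliminated, so only an empty line is printed
echo HTTPS_PROXY="$HTTPS_PROXY"   # prints: HTTPS_PROXY=
count HTTPS_PROXY=$HTTPS_PROXY    # 0: no argument survives the expansion
count HTTPS_PROXY="$HTTPS_PROXY"  # 1: quoting always produces exactly one argument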

PowerShell string format different behaviour within function

While using PowerShell I struggle to build a filename from two variables. When I originally created the PowerShell script, it was working fine. Now I have tried to move some repeatable steps into a function, but the string behaviour is different.
MWE:
$topa = "ABC"
$topb = "XYZ"
function Test-Fun{
    param(
        $a,
        $b
    )
    echo "$($a)H$($b).csv"
}
echo "$($topa)H$($topb).csv"
Test-Fun($topa, $topb)
The output on my system is
ABCHXYZ.csv
ABC XYZH.csv
Originally, I wanted to use an underscore instead of H and thought that was causing the issue, but it's not. What did I miss, or rather, what is the difference between string expansion within a function and outside of it?
You are calling Test-Fun wrong. The comma after $topa creates an array, so you effectively pass the array @("ABC", "XYZ") as $a. In that case $b is empty!
You can easily fix this by removing the comma (also the parentheses are not necessary):
Test-Fun $topa $topb
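For comparison, a sketch of both call styles with the output I'd expect, given the parameter binding described above:
Test-Fun $topa $topb      # positional arguments: $a = "ABC", $b = "XYZ"  -> ABCHXYZ.csv
Test-Fun($topa, $topb)    # one array argument: $a = @("ABC", "XYZ"), $b stays empty -> ABC XYZH.csv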

TXR: How to combine all lines where the following line begins with a tab?

I am trying to parse the text output of a shell command using txr.
In the output, a line is continued by a tab-indented line that follows it (an actual tab character, not the literal \t I show below). Note that the other variable assignment lines (the ones that don't represent extended-length values) have leading spaces in the input.
Variable Group: 1
variable = the value of the variable
long_variable = the value of the long variable
\tspans across multiple lines
really_long_variable = this variable extends
\tacross more than two lines, but it
\tis unclear how many lines it will end up extending
\tacross ahead of time
Variable Group: 2
variable = the value of the variable in group 2
long_variable = this variable might not be that long
really_long_variable = neither might this one!
How might I capture these using the TXR pattern language? I know about the @(freeform) directive and its optional numeric argument to treat the next n lines as one big line. Thus, it seems to me the right approach would be something like:
@(collect)
Variable Group: @i
variable = @value
@(freeform 2)
long_variable = @long_value
@(set long_value @(regsub #/[\t ]+/ "" long_value))
@(freeform (count-next-lines-starting-with-tab))
really_long_variable = @really_long_value
@(set really_long_value @(regsub #/[\t ]+/ "" really_long_value))
@(end)
However, it's not clear to me how I might write the count-next-lines-starting-with-tab procedure in TXR Lisp. On the other hand, maybe there is a better way to approach this problem altogether. Could you provide any suggestions?
Thanks in advance!
Let's apply the KISS principle; we don't need to bring in @(freeform). Instead we can separately capture the main line and the continuation lines for the (potentially) multi-line variables. Then, intelligently combine them with @(merge):
@(collect)
Variable Group: @i
variable = @value
long_variable = @l_head
@ (collect :gap 0 :vars (l_cont))
\t@l_cont
@ (end)
really_long_variable = @rl_head
@ (collect :gap 0 :vars (rl_cont))
\t@rl_cont
@ (end)
@ (merge long_variable l_head l_cont)
@ (merge really_long_variable rl_head rl_cont)
@(end)
Note that the \t at the start of the @l_cont and @rl_cont lines stands for a literal tab, just as in the sample data, so that those lines only match continuation lines. Instead of relying on literal tabs in the pattern, we can encode them using @\t.
Test run on the real data with \t replaced by tabs:
$ txr -Bl new.txr data
(i "1" "2")
(value "the value of the variable" "the value of the variable in group 2")
(l_head "the value of the long variable" "this variable might not be that long")(l_cont ("spans across multiple lines") nil)
(rl_head "this variable extends" "neither might this one!")
(rl_cont ("across more than two lines, but it" "is unclear how many lines it will end up extending"
"across ahead of time") nil)
(long_variable ("the value of the long variable" "spans across multiple lines")
("this variable might not be that long"))
(really_long_variable ("this variable extends" "across more than two lines, but it"
"is unclear how many lines it will end up extending" "across ahead of time")
("neither might this one!"))
We use a strict collect with :vars for the continuation lines, so that the variable is bound (to nil) even if nothing is collected. :gap 0 prevents these inner collects from scanning across lines that don't start with tabs: another strictness measure.
#(merge) has "special" semantics for combining lists of strings that haver different nesting levels; it's perfect for assembling data from different levels of collection and is basically tailor made for this kind of thing. This problem is very similar to extracting HTTP, Usenet or e-mail headers, which can have continuation lines.
On the topic of how to write a Lisp function to look ahead in the data, the most important aspect is how to get a handle on the data at the current position. The TXR pattern matching works by backtracking over a lazy list of strings (lines/records). We can use the @(data) directive to capture the list pointer at the given input position. Then we can just treat that as a list:
@(data here)
@(bind tab-start-lines @(length (take-while (f^ #/\t/) here)))
Now tab-start-lines holds a count of how many lines in the input start with a tab. However, take-while unfortunately has a termination-condition bug: if the remaining data consists of nothing but one or more tab-indented lines, it misbehaves. Until TXR 166 is released, this requires a little workaround: (take-while [iff stringp (f^ #/\t/)] here).
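Putting that workaround into the capture shown above, the complete lookahead would read something like this (the same here variable; just the two snippets from this answer combined):
@(data here)
@(bind tab-start-lines @(length (take-while [iff stringp (f^ #/\t/)] here)))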

Why are @#, @!, @, etc. not interpolated in strings?

First, please note that I ask this question out of curiosity, and I'm aware that using variable names like @# is probably not a good idea.
When using double quotes (or the qq operator), scalars and arrays are interpolated:
$v = 5;
say "$v"; # prints: 5
$# = 6;
say "$#"; # prints: 6
@a = (1,2);
say "@a"; # prints: 1 2
Yet, with array names of the form @ + special char, like @#, @!, @,, @%, @;, etc., the array isn't interpolated:
@; = (1,2);
say "@;"; # prints nothing
say @; ; # prints: 1 2
So here is my question: does anyone know why such arrays aren't interpolated? Is it documented anywhere?
I couldn't find any information or documentation about that. There are too many articles/posts on Google (or SO) about the basics of interpolation, so maybe the answer was just hidden in one of them, or on the 10th page of results.
If you wonder why I could need variable names like those:
The -n (and -p for that matter) flag adds a semicolon ; at the end of the code (I'm not sure it works on every version of perl though). So I can make this program perl -nE 'push@a,1;say"@a"}{say@a' shorter by writing perl -nE 'push@;,1;say"@;"}{say@' instead, because that last ; converts say@ into say@;. Well, actually I can't do that, because @; isn't interpolated in double quotes. It won't be useful every day of course, but in some golfing challenges, why not!
It can be useful to obfuscate some code. (whether obfuscation is useful or not is another debate!)
Unfortunately I can't tell you why, but this restriction comes from code in toke.c that goes back to perl 5.000 (1994!). My best guess is that it's because Perl doesn't use any built-in array punctuation variables (except for @- and @+, added in 5.6 (2000)).
The code in S_scan_const only interprets @ as the start of an array if the following character is
a word character (e.g. @x, @_, @1), or
a : (e.g. @::foo), or
a ' (e.g. @'foo (this is the old syntax for ::)), or
a { (e.g. @{foo}), or
a $ (e.g. @$foo), or
a + or - (the arrays @+ and @-), but not in regexes.
As you can see, the only punctuation arrays that are supported are @- and @+, and even then not inside a regex. Initially no punctuation arrays were supported; @- and @+ were special-cased in 2000. (The exception in regex patterns was added to make /[\c@-\c_]/ work; it used to interpolate @- first.)
There is a workaround: Because @{ is treated as the start of an array variable, the syntax "@{;}" works (but that doesn't help your golf code because it makes the code longer).
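A quick illustration of that workaround (a sketch of what I would expect based on the behaviour described above):
use feature 'say';
@; = (1, 2);
say "@;";     # prints an empty line: @; is not recognized inside the string
say "@{;}";   # prints: 1 2 (the braces make the punctuation name explicit)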
Perl's documentation says that the result is "not strictly predictable".
The following, from perldoc perlop (Perl 5.22.1), refers to interpolation of scalars. I presume it applies equally to arrays.
Note also that the interpolation code needs to make a decision on
where the interpolated scalar ends. For instance, whether
"a $x -> {c}" really means:
"a " . $x . " -> {c}";
or:
"a " . $x -> {c};
Most of the time, the longest possible text that does not include
spaces between components and which contains matching braces or
brackets. Because the outcome may be determined by voting based on
heuristic estimators, the result is not strictly predictable.
Fortunately, it's usually correct for ambiguous cases.
Some things are just because "Larry coded it that way". Or as I used to say in class, "It works the way you think, provided you think like Larry thinks", sometimes adding "and it's my job to teach you how Larry thinks."

Passing a variable to a command in a script

I've been searching all over the place, and since I'm taking my first steps in Perl this might be one of the dumbest questions, but here it goes.
So I'm creating a script to manage my windows and later bind it to keyboard shortcuts, so I'm trying to run a command and pass it some variables:
my $command = `wmctrl -r :ACTIVE: -e 0,0,0,$monitors->{1}->{'width'}/2,$monitors->{1}->{'height'}`;
But I get an error saying I'm not passing the right parameters to the command, but if I do this, everything works great:
my $test = $monitors->{1}->{'width'}/2;
my $command = `wmctrl -r :ACTIVE: -e 0,0,0,$test,$monitors->{1}->{'height'}`;
So do I really have to do this, i.e. assign it to a variable first and then pass it, or is there a more elegant way of doing it?
The backticks operator (or qx{}) accepts a string which is (possibly) interpolated. So it accepts a string, not an expression like $var/2.
That means the $variables (including $var->{1}->{some}) are expanded, but arithmetic expressions are not.
Therefore your two-step variant works, but the first one doesn't.
If you want to evaluate an expression inside the string, you can use the following:
my $ans=42;
print "The #{[ $ans/2 ]} is only the half of answer\n";
prints
The 21 is only the half of answer
but it is not very readable, so the better and more elegant approach is what you're already doing: calculate the command argument in advance, and pass only the calculated $variables to qx{} or the backticks.
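Applied to the original command, the @{[ ... ]} trick would look something like this (just a sketch, assuming the same $monitors hashref as in the question):
# embed the arithmetic via @{[ ... ]} instead of a separate variable
my $command = `wmctrl -r :ACTIVE: -e 0,0,0,@{[ $monitors->{1}->{'width'}/2 ]},$monitors->{1}->{'height'}`;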

How does this Perl one liner to check if a directory is empty work?

I got this strange line of code today; it tells me 'empty' or 'not empty' depending on whether the CWD has any items in it (other than . and ..).
I want to know how it works because it makes no sense to me.
perl -le 'print+(q=not =)[2==(()=<.* *>)].empty'
The bit I am interested in is <.* *>. I don't understand how it gets the names of all the files in the directory.
It's a golfed one-liner. The -e flag means to execute the rest of the command line as the program. The -l enables automatic line-end processing.
The <.* *> portion is a glob containing two patterns to expand: .* and *.
This portion
(q=not =)
is a list containing a single value -- the string "not". The q=...= is an alternate string delimiter, apparently used because the single-quote is being used to quote the one-liner.
The [...] portion is the subscript into that list. The subscript will be either 0 (selecting the value "not ") or 1 (selecting nothing, which prints as the empty string), depending on the result of this comparison:
2 == (()=<.* *>)
There's a lot happening here. The comparison tests whether or not the glob returned a list of exactly two items (assumed to be . and ..) but how it does that is tricky. The inner parentheses denote an empty list. Assigning to this list puts the glob in list context so that it returns all the files in the directory. (In scalar context it would behave like an iterator and return only one at a time.) The assignment itself is evaluated in scalar context (being on the right hand side of the comparison) and therefore returns the number of elements assigned.
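The same "count the elements" idiom in isolation (a tiny sketch, separate from the one-liner):
# assigning to an empty list forces list context; the outer assignment,
# evaluated in scalar context, then yields the number of elements
my $count = () = (10, 20, 30);   # $count is 3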
The leading + is to prevent Perl from parsing the list as arguments to print. The trailing .empty concatenates the string "empty" to whatever came out of the list (i.e. either "not " or the empty string).
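For reference, a de-golfed sketch of the same logic (an equivalent spelled out, not the original one-liner):
# expand both patterns in list context, like <.* *> does
my @entries = glob('.* *');
# an "empty" directory contains only the . and .. entries
my $is_empty = (@entries == 2);
print(($is_empty ? '' : 'not ') . "empty\n");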
<.* *>
is a glob consisting of two patterns: .* matches all file names that start with . and * matches all the files that don't (this is different from the usual DOS/Windows conventions).
(()=<.* *>)
evaluates the glob in list context, returning all the file names that match.
Then, the comparison with 2 puts it into scalar context so 2 is compared to the number of files returned. If that number is 2, then the only directory entries are . and .., period. ;-)
<.* *> means (glob(".*"), glob("*")). glob expands file patterns the same way the shell does.
I find that the B::Deparse module helps quite a bit in deciphering some stuff that throws off most programmers' eyes, such as the q=...= construct:
$ perl -MO=Deparse,-p,-q,-sC 2>/dev/null << EOF
> print+(q=not =)[2==(()=<.* *>)].empty
> EOF
use File::Glob ();
print((('not ')[(2 == (() = glob('.* *')))] . 'empty'));
Of course, this doesn't instantly produce "readable" code, but it surely clears up some of the stumbling blocks.
The documentation for that feature is in perldoc perlop (scroll to near the end of the section).