Sed: Add a line at the start of each Tcl proc

I have a Tcl proc like this, and I want to add a puts " entered myproc" line right after the start of the proc body:
proc myproc { {filename "input.txt"}
{var1 "x"}
{var2 "y"}
{var3 "z"}
{var4 ""}
{var5 "0"}
{var6 "0"}
{var7 0}
} {
puts " entered myproc"
Can you help?
It should also work for:
proc myproc2 { N val } {
puts " entered myproc"
# comment line
set ret {}
for { set i 0 } { $i < $N } { incr i } { lappend ret $val }
return $ret
}

If all you want is an execution trace of your code, such as a call-stack dump, then you don't need to modify your source code at all. You can use Tcl itself to do it for you.
Tcl has no reserved keywords, none at all. Not even proc is reserved. You can therefore redefine it:
rename proc _proc
# Now proc no longer exists but we have _proc instead.
# Use it to redefine "proc":
_proc proc {name arguments body} {
set body "puts \"entered $name\";$body"
_proc $name $arguments $body
}
Just do that before running any of your own code and you'll find that every proc prints out when it's being entered on each call.
This is how a lot of Tcl debuggers and profilers work: using Tcl to redefine itself.
From your comments it looks like you're trying to also print how deep the stack is with each call. To do that you need to add more code to each proc definition. The most straightforward way is of course something like this:
_proc proc {name arguments body} {
set preamble "set dist2top \[info level\];puts \"\$dist2top entered $name\""
set body "$preamble;$body"
_proc $name $arguments $body
}
But as you can see, writing code inside strings can quickly become unmanageable. There are several tricks you can use to make it more manageable. One of the more common is to split $body by line and use list commands to manipulate the code; that removes at least one level of quoting hell. My favorite is a templating technique similar to how you'd write HTML templates in MVC frameworks. I usually use string map for this:
_proc proc {name arguments body} {
_proc $name $arguments [string map [list %NAME% $name %BODY% $body] {
set dist2top [info level]
puts "$dist2top entered: %NAME%"
%BODY%
}]
}
The last argument in the _proc definition is just a string, but it looks like a code block, which makes it easier to read. No nasty quoting hell with this technique.
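To see the redefinition in action, here is a minimal sketch (greet is a throwaway proc for illustration; the depth number depends on the call stack at the point of the call):
proc greet {who} {
    puts "hello, $who"
}
greet world
# prints:
#   1 entered: greet
#   hello, world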

Using awk you can do:
awk '/^ *proc/ {$0 = $0 "\nputs \" entered myproc\""} 1' RS= proc-file.tcl
Gives this file:
proc myproc { {filename "input.txt"}
{var1 "x"}
{var2 "y"}
{var3 "z"}
{var4 ""}
{var5 "0"}
{var6 "0"}
{var7 0}
} {
puts " entered myproc"
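Since the title asks for sed, here is a GNU sed sketch of the same idea. It assumes the proc body opens on a line that ends either with the proc header's { or with } { on its own:
sed -E '/^(proc .*\{|\} \{)[[:space:]]*$/a puts " entered myproc"' proc-file.tcl
The one-line "a text" form is a GNU extension; with POSIX sed you need a\ followed by the appended text on the next line.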

Import from strings from a text file and only expand variables

I'm trying to write a function that will print a user-supplied greeting addressed to a user-supplied name. I want to use expanding strings the way I can in this code block:
$Name = "World"
$Greeting = "Hello, $Name!"
$Greeting
Which successfully prints Hello, World!. However, when I try to pass these strings as parameters to a function like so,
function HelloWorld
{
Param ($Greeting, $Name)
$Greeting
}
HelloWorld("Hello, $Name!", "World")
I get the output
Hello, !
World
Upon investigation, PowerShell seems to be ignoring $Name in "Hello, $Name!" completely, as running
HelloWorld("Hello, !", "World")
Produces output identical to above. Additionally, it doesn't seem to regard "World" as the value of $Name, since running
function HelloWorld
{
Param ($Greeting, $Name)
$Name
}
HelloWorld("Hello, $Name!", "World")
Produces no output.
Is there a way to get the expanding string to work when passed in as a function parameter?
In order to delay string interpolation and perform it on demand, with then-current values, you must use $ExecutionContext.InvokeCommand.ExpandString()[1] on a single-quoted string that acts as a template:
function HelloWorld
{
Param ($Greeting, $Name)
$ExecutionContext.InvokeCommand.ExpandString($Greeting)
}
HelloWorld 'Hello, $Name!' 'World' # -> 'Hello, World!'
Note how 'Hello, $Name!' is single-quoted to prevent instant expansion (interpolation).
Also note how HelloWorld is called with its arguments separated with spaces, not ,, and without (...).
In PowerShell, functions are invoked like command-line executables - foo arg1 arg2 - not like C# methods - foo(arg1, arg2) - see Get-Help about_Parsing.
If you accidentally use , to separate your arguments, you'll construct an array that a function sees as a single argument.
To help you avoid accidental use of method syntax, you can use Set-StrictMode -Version 2 or higher, but note that that entails additional strictness checks.
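A quick sketch of that pitfall (Two is a throwaway function for illustration):
function Two
{
    Param ($a, $b)
    "a=[$a] b=[$b]"
}
Two('x', 'y')  # method syntax: the array ('x', 'y') binds to $a -> a=[x y] b=[]
Two 'x' 'y'    # command syntax: two arguments -> a=[x] b=[y]
Set-StrictMode -Version 2
Two('x', 'y')  # now raises an error about calling a function like a method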
Note that since PowerShell functions by default also see variables defined in the parent scope (all ancestral scopes), you could simply define any variables that the template references in the calling scope instead of declaring individual parameters such as $Name:
function HelloWorld
{
Param ($Greeting) # Pass the template only.
$ExecutionContext.InvokeCommand.ExpandString($Greeting)
}
$Name = 'World' # Define the variable(s) used in the template.
HelloWorld 'Hello, $Name!' # -> 'Hello, World!'
Caveat: PowerShell string interpolation supports full commands - e.g., "Today is $(Get-Date)" - so unless you fully control or trust the template string, this technique can be a security risk.
Ansgar Wiechers proposes a safe alternative based on .NET string formatting via PowerShell's -f operator and indexed placeholders ({0}, {1}, ...):
Note that you can then no longer apply transformations on the arguments as part of the template string or embed commands in it in general.
function HelloWorld
{
Param ($Greeting, $Name)
$Greeting -f $Name
}
HelloWorld 'Hello, {0}!' 'World' # -> 'Hello, World!'
Pitfalls:
PowerShell's string expansion uses the invariant culture, whereas the -f operator performs culture-sensitive formatting (snippet requires PSv3+):
$prev = [cultureinfo]::CurrentCulture
# Temporarily switch to culture with "," as the decimal mark
[cultureinfo]::CurrentCulture = 'fr-FR'
# string expansion: culture-invariant: decimal mark is always "."
$v=1.2; "$v"; # -> '1.2'
# -f operator: culture-sensitive: decimal mark is now ","
'{0}' -f $v # -> '1,2'
[cultureinfo]::CurrentCulture = $prev
PowerShell's string expansion supports expanding collections (arrays) - it expands them to a space-separated list - whereas the -f operator only supports scalars (single values):
$arr = 'one', 'two'
# string expansion: array is converted to space-separated list
"$arr" # -> 'one two'
# -f operator: array elements are syntactically treated as separate values
# so only the *first* element replaces {0}
'{0}' -f $arr # -> 'one'
# If you use a *nested* array to force treatment as a single array-argument,
# you get a meaningless representation (.ToString() called on the array)
'{0}' -f (, $arr) # -> 'System.Object[]'
[1] Surfacing the functionality of the $ExecutionContext.InvokeCommand.ExpandString() method in a more discoverable way, namely via an Expand-String cmdlet, is the subject of GitHub feature-request issue #11693.
Your issue occurs because the $Name string replacement is happening outside of the function, before the $Name variable is populated inside of the function.
You could do something like this instead:
function HelloWorld
{
Param ($Greeting, $Name)
$Greeting -replace '\$Name',$Name
}
HelloWorld -Greeting 'Hello, $Name!' -Name 'World'
By using single quotes, we send the literal greeting Hello, $Name into the function and then do the replacement there using -replace (we have to put a \ before the $ in the pattern we're replacing because $ is a regex special character).
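If you'd rather not escape regex metacharacters by hand, [regex]::Escape can do the escaping for you; only the replacement line changes:
$Greeting -replace [regex]::Escape('$Name'), $Name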

PowerShell stderr redirect to file inserts newlines

Edit: I created a PowerShell UserVoice "suggestion" for (against?) this behavior; feel free to upvote.
PowerShell (5.1.16299.98, Windows 10 Pro 10.0.16299) is inserting newlines into my stderr when I redirect to file—as if to format for the console. Let's generate error messages of arbitrary length:
class Program
{
static void Main(string[] args)
{
System.Console.Error.WriteLine(new string('x', int.Parse(args[0])));
}
}
I compiled the above to longerr.exe. Then I call it like this:
$ .\longerr.exe 60 2>error.txt
I ran the following script in a PowerShell console with window width 60:
$h = '.\longerr.exe : '.Length
$w = 60 - 1
$f = 'error.txt'
Remove-Item $f -ea Ignore
(($w-$h), ($w-$h+1), ($w), ($w+1), ($w*2-$h), ($w*2-$h+1)) |
% {
$_ >> $f
.\longerr.exe $_ 2>>$f
}
Now in a wider console I ran the following:
$ Get-Content $f | Select-String '^(?![+\t]|At line| )'
(I could have just opened the file in a text editor and trimmed lines.) Here's the output:
43
.\longerr.exe : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
44
.\longerr.exe :
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
59
.\longerr.exe :
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
60
.\longerr.exe : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxx
102
.\longerr.exe : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
103
.\longerr.exe : xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
x
Why is PowerShell doing this? Can I make it stop? I'd rather not have to do something like this:
.\longerr.exe $_ 2>&1 |
% {
if ($_ -is [System.Management.Automation.ErrorRecord]) {
$_.Exception.Message | Out-File -FilePath $f -Append
}
}
First, all the opening and closing of that file can be slow (I know I can add more code and use a StreamWriter), and second, there is still a problem (bug?) with that approach, which I won't go into in this question.
For a sanity check, I ran longerr.exe 1000 2>test.txt in cmd.exe; it inserted no spurious linebreaks.
Thanks to TessellatingHeckler's comment pointing me to the question Why does PowerShell chops message on stderr?, I've been able to fix both problems in my question—the main one and the one I mentioned at the end. The key is the following:
The complete output on stderr of the executable is simply split across several objects of type System.Management.Automation.ErrorRecord. The actual splitting seems to be non deterministic (*). Moreover, the partial strings are stored inside the property Exception instead of TargetObject. Only the first ErrorRecord has a non-null TargetObject.
⋮
(*) It depends on the order of write/flush calls of the program in relation to the read calls of the Powershell. If one adds a fflush(stderr) after each fprintf() in my test program below, there will be much more ErrorRecord objects. Except the first one, which seems deterministic, some of them include 2 output lines and some of them 3.
With this, I was able to modify longerr.exe to also reproduce the bug I alluded to at the end:
class Program
{
static void Main(string[] args)
{
if (args.Length == 1)
{
System.Console.Error.WriteLine(new string('x', int.Parse(args[0])));
}
else
{
for (int i = 0; i < int.Parse(args[1]); i++)
{
System.Console.Error.WriteLine("\n");
System.Console.Error.WriteLine(new string('x', int.Parse(args[0])));
}
}
}
}
Here's the PowerShell script which works (and efficiently):
$p_out = 'success.txt'
$p_err = 'error.txt'
try
{
[Environment]::CurrentDirectory = $PWD
$append = $false
$out = [System.IO.StreamWriter]::new($p_out, $append)
$err = [System.IO.StreamWriter]::new($p_err, $append)
.\longerr.exe 2000 4 2>&1 |
% {
if ($_ -is [System.Management.Automation.ErrorRecord]) {
# https://stackoverflow.com/a/33858097/2328341
if ($_.TargetObject -ne $null) {
$err.WriteLine();
}
$err.Write($_.Exception.Message)
} else {
$out.WriteLine($_)
}
}
}
finally
{
$out.Close()
$err.Close()
}
Notes:
Without the test for TargetObject, I would eliminate the main problem I was focusing on in my question, but I would still get the "bug" I mentioned at the end, which the linked SO question addresses.
I could have used Out-File (with -NoNewline) instead of StreamWriter, but there are two problems with that:
Windows Defender was making all the file opens and closes over an order of magnitude slower than when I turned off "Real-time protection" (either globally or on the directory containing error.txt and success.txt).
Even without Defender slowing things down, StreamWriter out-performs Out-File by over an order of magnitude. For reference, I'm using a Samsung 960 EVO for storage.
The StreamWriter(string, bool) constructor writes UTF-8 with no Byte-Order Mark (BOM), while PowerShell 5.1's redirection operators > and >> use UTF-16 LE with BOM. For reference, PowerShell 6.0 defaults to UTF-8 with no BOM.
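To see the encoding difference for yourself, here's a small sketch (PSv5.1; sw.txt and ps51.txt are throwaway file names):
$sw = [System.IO.StreamWriter]::new("$PWD\sw.txt", $false)
$sw.WriteLine('x')
$sw.Close()
'x' > ps51.txt
Format-Hex sw.txt   | Select-Object -First 1  # 78 0D 0A ... - no BOM, UTF-8
Format-Hex ps51.txt | Select-Object -First 1  # FF FE ...    - UTF-16 LE BOM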
(I've included stdout for completeness in real-world situations.) Now, that's an absolutely ridiculous amount of work to get the following functionality of cmd.exe:
$ .\longerr.exe 2000 4 2>error.txt

How to run if inline on PowerShell command

Suppose I have this function;
Function SomeCommand{
param(
[string]$Var1,
[string]$Var2
)
# Do Something
}
I could set it up like this:
Function SomeCommand{
param(
[string]$Var1,
[string]$Var2
)
if ($Var2){
Do-SomeOtherCommand -SomeParam1 $Var1 -SomeParam2 $Var2
} else {
Do-SomeOtherCommand -SomeParam1 $Var1
}
}
This works fine if I only have one optional parameter, but if I have two it gets hairy. I would like to do something like this:
Function SomeCommand{
param(
[string]$Var1,
[string]$Var2,
[string]$Var3
)
Do-SomeOtherCommand -SomeParam1 $Var1 (if($Var2){-SomeParam2 $Var2}) (if($Var3){-SomeParam3 $Var3})
}
Is there a way to accomplish this?
You are probably looking for splatting. You can build up a hashtable with the parameters you wish to pass (and their values), then specify the whole thing in one shot:
function FuncB($param1, $param2)
{
"FuncB -- param1:[$param1] param2:[$param2]"
}
function FuncA($paramA, $paramB)
{
$args = @{}
if ($paramA){ $args['param1'] = $paramA }
if ($paramB){ $args['param2'] = $paramB }
FuncB @args
}
Test
FuncA 'first' 'second'
FuncA 'OnlyFirst'
FuncA -paramB 'OnlySecond'
# results
# FuncB -- param1:[first] param2:[second]
# FuncB -- param1:[OnlyFirst] param2:[]
# FuncB -- param1:[] param2:[OnlySecond]
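Splatting also works with arrays, which bind positionally instead of by name:
$list = 'first', 'second'
FuncB @list
# FuncB -- param1:[first] param2:[second]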
Semicolons. PowerShell allows you to use semicolons as line terminators.
Write-Output 1;Write-Output 2;Write-Output 3;
Personally, I think it should be mandatory.
Also note that you can build up an arbitrary expression as a simple string, then use Invoke-Expression (alias iex) to invoke it inline.
function FuncB($param1, $param2)
{
"FuncB -- param1:[$param1] param2:[$param2]"
}
function FuncA($paramA, $paramB)
{
$funcBCall = "FuncB $(if($paramA){ "-param1 '$paramA'" }) $(if($paramB){ "-param2 '$paramB'" })"
iex $funcBCall
}
This approach is very hacky and brittle, though, so I wouldn't recommend it.

Send request parameters when calling a PHP script via command line

When you run a PHP script through a browser it looks something like
http://somewebsite.com/yourscript?param1=val1&param2=val2.
I am trying to achieve the same thing via command line without having to rewrite the script to accept argv instead of $_REQUEST. Is there a way to do something like this:
php yourscript.php?param1=val1&param2=val2
such that the parameters you send show up in the $_REQUEST variable?
In case you don't want to modify the running script, you can use the -B parameter to specify code to run before the input file. In that case you must also add the -F flag to specify your input file:
php -B "\$_REQUEST = array('param1' => 'val1', 'param2' => 'val2');" -F yourscript.php
I can't take credit for this but I adopted this in my bootstrap file:
// Concatenate and parse string into $_REQUEST
if (php_sapi_name() === 'cli') {
parse_str(implode('&', array_slice($argv, 1)), $_REQUEST);
}
Upon executing a PHP file from the command line:
php yourscript.php param1=val1 param2=val2
The above will insert the keys and values into $_REQUEST for later retrieval.
No, there is no easy way to achieve that. The web server splits up the request string and passes it to the PHP interpreter, which then stores it in the $_REQUEST array.
If you run from the command line and want to accept similar parameters, you'll have to parse them yourself. The command line has a completely different syntax for passing parameters than HTTP does. You might want to look into getopt.
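For instance, a minimal getopt() sketch (the option names are illustrative):
<?php
// The trailing ":" marks a long option that requires a value.
$options = getopt('', array('param1:', 'param2:'));
print_r($options);
Invoked as php yourscript.php --param1=val1 --param2=val2, this prints:
Array
(
    [param1] => val1
    [param2] => val2
)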
For a brute force approach that doesn't take user error into account, you can try this snippet:
<?php
foreach( $argv as $argument ) {
if( $argument == $argv[ 0 ] ) continue;
$pair = explode( "=", $argument );
$variableName = substr( $pair[ 0 ], 2 );
$variableValue = $pair[ 1 ];
echo $variableName . " = " . $variableValue . "\n";
// Optionally store the variable in $_REQUEST
$_REQUEST[ $variableName ] = $variableValue;
}
Use it like this:
$ php test.php --param1=val1 --param2=val2
param1 = val1
param2 = val2
I wrote a short function to handle this situation: if command-line arguments are present and the $_REQUEST array is empty (i.e., when you're running a script from the command line instead of through a web interface), it looks for command-line arguments in key=value pairs and loads them into $_REQUEST:
Argv2Request($argv);
print_r($_REQUEST);
function Argv2Request($argv) {
/*
When $_REQUEST is empty and $argv is defined,
interpret $argv[1]...$argv[n] as key => value pairs
and load them into the $_REQUEST array
This allows the php command line to substitute for GET/POST values, e.g.
php script.php animal=fish color=red number=1 has_car=true has_star=false
*/
if ($argv !== NULL && sizeof($_REQUEST) == 0) {
$argv0 = array_shift($argv); // first arg is different and is not needed
foreach ($argv as $pair) {
list ($k, $v) = explode("=", $pair, 2); // split() was removed in PHP 7
$_REQUEST[$k] = $v;
}
}
}
The sample input suggested in the function's comment is:
php script.php animal=fish color=red number=1 has_car=true has_star=false
which yields the output:
Array
(
[animal] => fish
[color] => red
[number] => 1
[has_car] => true
[has_star] => false
)

Calling a local function from a dot sourced file

I have a main script that I am running. It reads through a directory filled with other PowerShell scripts, dot-sources them all, and runs a predefined function in each, named after the first portion of the dot-delimited file name. Example:
Run master.ps1
Master.ps1 dot sources .\resource\sub.ps1
Sub.ps1 has defined a function called 'dosub'
Master.ps1 runs 'dosub' using Invoke-Expression
Also defined in sub.ps1 is the function 'saysomething'. Implemented in 'dosub' is a call to 'saysomething'.
My problem is I keep getting the error:
The term 'saysomething' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
Why can't the function 'dosub' find the function 'saysomething', which is defined in the same file?
master.ps1:
$handlersDir = "handlers"
$handlers = @(Get-ChildItem $handlersDir)
foreach ( $handler in $handlers ) {
. .\$handlersDir\$handler
$fnParts = $handler.Name.split(".")
$exp = "do" + $fnParts[0]
Invoke-Expression $exp
}
sub.ps1:
function saysomething() {
Write-Host "I'm here to say something!"
}
function dosub() {
saysomething
Write-Host "In dosub!"
}
Your code works on my system. However you can simplify it a bit:
$handlersDir = "handlers"
$handlers = @(Get-ChildItem $handlersDir)
foreach ( $handler in $handlers )
{
. .\$handlersDir\$handler
$exp = "do" + $handler.BaseName
Write-Host "Calling $exp"
& $exp
}
Note the availability of the BaseName property. You also don't need to use Invoke-Expression; you can just call the named command using the call operator (&).
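For example, with dosub defined as above, both of these invoke it from a string, but the call operator doesn't evaluate arbitrary code:
$exp = "dosub"
& $exp                  # invokes the command named by the string
Invoke-Expression $exp  # also works, but evaluates any expression - riskier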
What you have given works as expected. You probably don't have the directories etc. set up properly on your machine, or you are running something other than the (working!) code you posted here.
You can also make the following corrections. Instead of:
. .\$handlersDir\$handler
you can do:
. $handler.FullName
And instead of splitting the file name you can do:
$exp = "do" + $handler.BaseName