There is at least this question on how to initiate actions on shell closing. The problem is that the approach doesn't catch closing the PowerShell window with the 'X' button; apparently that doesn't generate the exit event. Is there any way to capture such a close and force actions upon it?
What I want is to pipe some values from session variables (a hashtable of System.Diagnostics.Process objects) into system variables upon closing, so that a new session can access them directly.
Perhaps the adage "Every problem can be solved with another level of indirection" applies here.
It really depends on your needs, how your script(s) work, etc. Perhaps you could start each script that needs this behavior by loading a common library, which exports a function that watches for the exit of a given PowerShell process, namely your current one.
You can get this info from the $pid variable and pass it to your watcher function, which launches a process that waits for your current script to exit. The watcher could be another script, C#, etc. See also Get-Process -Id $pid.
Without knowing what you are trying to accomplish, it is hard to recommend anything more specific that might meet your needs, such as logging, transcripts, user education, or detection of 'dirty shutdowns' of your code.
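As a sketch of that watcher idea (the function name and the cleanup script path are hypothetical, and the cleanup logic is up to you), the common library could launch a detached PowerShell process that blocks on the current session's $pid and then runs your persistence code:

```powershell
# Hypothetical watcher: spawns a second PowerShell that waits for the
# current session (identified by $pid) to exit -- including via the 'X'
# button -- and then runs a cleanup script.
function Start-ExitWatcher {
    param(
        [int]$TargetPid = $pid,                            # process to watch
        [string]$OnExitScript = 'C:\scripts\on-exit.ps1'   # hypothetical cleanup script
    )
    $watcherCommand = @"
`$p = Get-Process -Id $TargetPid -ErrorAction SilentlyContinue
if (`$p) { `$p.WaitForExit() }   # blocks until the watched shell closes
& '$OnExitScript'                # e.g. persist session state to system variables
"@
    Start-Process powershell.exe -ArgumentList '-NoProfile', '-Command', $watcherCommand
}
```

Because the watcher is a separate process, it survives the 'X' click that kills the session without raising any exit event inside it. Note that the watcher cannot read the dead session's in-memory variables, so anything it should persist must already be on disk or passed to it beforehand.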
Related
Imagine I have a program written in whatever language and compiled to run interactively using just a command-line interface. Let's use this one just for the sake of simplifying the question:
The program first asks the user its name.
Then, based on some business logic, it may ask for the user's age OR the user's email. Only one of those.
After that it finishes with success or error.
Now imagine that I want to write a script in PowerShell that fills in all that data automatically.
How can I achieve this? How can I run this program, read its questions (outputs) and then provide the correct answer (input)?
If you don't know the questions it will ask ahead of time, this would be tough.
PowerShell scripts are normally linear. Once you start the program from PowerShell, the script waits for the program to finish before continuing. There are ways to run things in parallel, but they don't interact with the program's prompts like that.
If, on the other hand, you're dealing with something like a website, the first call returns a response (completing the command), and you could match the response to select the proper value to send next.
Or if the program is local and allows command line parameters, you could do that.
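That said, if the program reads its answers from standard input and writes its questions to standard output, one workable sketch is to redirect the child's streams through System.Diagnostics.Process and match each prompt before replying. The program path, prompt wording, and answers below are all made up:

```powershell
# Sketch: drive an interactive console program by matching its prompts.
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName               = 'C:\tools\ask.exe'   # hypothetical interactive program
$psi.UseShellExecute        = $false
$psi.RedirectStandardInput  = $true
$psi.RedirectStandardOutput = $true

$proc = [System.Diagnostics.Process]::Start($psi)

while (-not $proc.StandardOutput.EndOfStream) {
    $line = $proc.StandardOutput.ReadLine()
    switch -Wildcard ($line) {
        '*name*'  { $proc.StandardInput.WriteLine('Alice') }
        '*age*'   { $proc.StandardInput.WriteLine('30') }
        '*email*' { $proc.StandardInput.WriteLine('alice@example.com') }
    }
}
$proc.WaitForExit()
```

This only works if the program prints its prompts to stdout (terminated by newlines) and reads replies from stdin; programs that talk to the console device directly won't see the redirected streams.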
I'm chasing a problem in Joomla and am debugging with Eclipse for PHP. I need a record of the values a specific parameter takes in successive calls to a function.
I know how to set breakpoints and inspect variables, or how to watch a variable. The problem is that the function is called dozens of times; too many to manually copy and paste the value of the parameter in question at each breakpoint. It's just too error-prone a process.
I wonder if there is any way to automate this. I.e. when Eclipse stops because the breakpoint is reached, inspect a specific variable, copy its current value, and record that in, say, a file. I can then manually continue the execution (F8) or the automated process can do that, this doesn't matter.
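One common workaround, if your debugger supports conditional breakpoints, is to give the breakpoint a condition that logs the value as a side effect and then evaluates to false, so execution never actually suspends. Assuming the parameter is called $param and /tmp/param-log.txt is writable (both are placeholders for your case), the condition expression could be:

```php
// Breakpoint condition: append the parameter's current value to a log
// file, then evaluate to false so the debugger does not suspend here.
file_put_contents('/tmp/param-log.txt',
    var_export($param, true) . PHP_EOL,
    FILE_APPEND) && false
```

After the run, the log file holds the parameter's value from every call, in order, with no manual stepping.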
When a process runs on Windows, any command-line arguments you pass to it are visible while the process is running, which makes passing plain-text sensitive data this way a really bad idea.
How does one prevent this from happening, short of implementing a public / private key infrastructure?
Do you just run everything via plink, since it exits right away after running the command (provided, of course, you make sure it actually does exit)?
See related questions below to refer to what I'm talking about:
https://stackoverflow.com/questions/7494073/commandline-arguments-of-running-process-in-dos
https://stackoverflow.com/questions/53808314/why-would-one-do-this-when-storing-a-secure-string/53808379?noredirect=1#comment94523315_53808379
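The usual alternative is to keep the secret out of argv entirely and hand it to the child over standard input (environment variables are another option, though they can leak too). A PowerShell sketch, where consumer.exe and its --password-stdin flag are hypothetical stand-ins for a tool that reads the secret from stdin:

```powershell
# Prompt without echoing, then pass the secret via stdin rather than as an
# argument, so it never appears in the child's visible command line.
$secure = Read-Host -AsSecureString -Prompt 'Password'

# Recover the plain text only at the moment of use.
$plain = [System.Net.NetworkCredential]::new('', $secure).Password

$plain | & 'C:\tools\consumer.exe' --password-stdin   # hypothetical tool and flag
```

Tools differ in how they accept stdin secrets, so check what the target program actually supports; the point is simply that stdin is not visible in the process list the way the command line is.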
I'm currently doing some combustion-engine analysis, which has led me to try to pass some specific heats from EES to MATLAB by using EES macros (.emf files) to generate the properties. This works great for simple tasks where the properties are just assigned to variables in the macro, which are then exported and read by MATLAB.
Now, I'm interested in getting the properties of products in chemical equilibrium calculations, so I need to solve coupled equations in EES. This poses a problem, since you can't have unassigned quantities on the right-hand side in EES macros.
The above problem was quickly solved by solving the equations for the equilibrium composition in a regular .ees file and then exporting the results. But this has led to another problem:
Once I call my MATLAB script, the procedure "hangs" just before the specific heats are returned. I've found that the script completes once you manually close the now-opened EES window, but this is not viable since I need to make several hundred imports.
The problem doesn't occur when using EES macros instead of files, since in a macro you can simply use the Quit statement at the end; but as mentioned, macros are not an option here. Does anyone know of an equivalent statement that works in an .ees file? I've also tried to shut down EES with a system command in my script: system('taskkill /F /IM EES.EXE');. But this doesn't seem to find the EES task, although it appears in Task Manager and in the taskbar (the statement itself is tested; it works if you open EES manually).
Any help is very much appreciated, thanks in advance!
Regards
You can use a macro file to solve the EES file and then quit the program.
Example.emf contains:
Open C:\Example.ees
Solve
Quit
And then the MATLAB system call
system('$EESPath\ees.exe C:\Example.emf');
will do the job.
You will need to leverage the $Export directive to place the results into an external file that MATLAB can then import.
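Putting the pieces together, a minimal sketch of the round trip might look as follows (all paths, variable names, and the CSV layout are placeholders for your setup). At the end of Example.ees, export the solved quantities:

$Export 'C:\results.csv' cp_prod, cv_prod

Then on the MATLAB side:

```matlab
% Run the macro, which opens Example.ees, solves it, and quits EES,
% so the system() call returns instead of hanging on the open window.
system('"C:\EES32\ees.exe" C:\Example.emf');

% Import the values the $Export directive wrote (placeholder layout:
% one row, cp_prod then cv_prod).
props = csvread('C:\results.csv');
cp = props(1);
cv = props(2);
```

Since the Quit at the end of the macro closes EES itself, no taskkill workaround is needed.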
I'm trying to deal with a server that works as follows:
It has a parent process
It creates a "helper" child process to handle some special tasks
It opens the child process with a pipe and uses the pipe to issue commands to the child.
It also spawns off many other child processes (the server's main goal is to execute various commands).
I would like to be able to detect when the write to the pipe to the child process fails; and issue a special notification.
Ordinarily, I would achieve that by creating a custom $SIG{PIPE} handler in the parent process.
However, what I'm concerned with is the fact that some of the processes the parent launches to execute commands might have their own pipes open to them; and if the write to THOSE pipes fails, I'd like to simply ignore the SIGPIPE.
Q1. Is there a way for me to tell from within SIGPIPE handler, which of the open pipes threw the signal? (I know every child's PID, so PID would be fine... or if there's a way to do it via file descriptor #s?).
Q2. Could I solve the problem using local $SIG{PIPE} somehow? My assumption is that I would need to:
Set helper-process-specific local $SIG{PIPE} right before writing to that pipe
do print $HELPER_PIPE (this happens in only one subroutine)
Reset $SIG{PIPE} to DEFAULT or IGNORE
Ensure that these 3 actions are within their own block scope.
The write syscall returns the error EPIPE in the same case when a SIGPIPE is triggered, assuming that the SIGPIPE doesn't succeed in killing the process. So your best bet is to set $SIG{PIPE} = 'IGNORE' (to avoid dying from the signal), to use $fh->autoflush (to avoid PerlIO buffering, ensuring that you're notified of any I/O errors immediately), and to check the return value of print whenever you call it. If print returns false and $!{EPIPE} is set, then you've tried to write to a closed pipe. If print returns false and $!{EPIPE} isn't set, you have some other issue to deal with.
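A sketch of that pattern around the helper pipe (the helper command here is hypothetical; in the question it is the long-lived child the parent spawns):

```perl
use strict;
use warnings;
use IO::Handle;   # for autoflush
use Errno;        # makes %! entries like $!{EPIPE} available

local $SIG{PIPE} = 'IGNORE';   # get EPIPE from print instead of a fatal signal

# Hypothetical helper process, written to via a pipe.
open(my $HELPER_PIPE, '|-', 'helper-process') or die "spawn failed: $!";
$HELPER_PIPE->autoflush(1);    # no PerlIO buffering: errors surface on this print

unless (print $HELPER_PIPE "command\n") {
    if ($!{EPIPE}) {
        warn "helper pipe closed; issuing special notification\n";
    } else {
        warn "write to helper failed: $!\n";
    }
}
```

Because the error is detected at the print that failed, you know exactly which pipe it was, which sidesteps Q1's problem of attributing an asynchronous SIGPIPE to a file descriptor.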
Portably, you can't tell. However, your OS might support sigaction's SA_SIGINFO mechanism, and if you can get that information up to Perl somehow, the siginfo_t structure contains a field that gives the FD number on SIGPIPE.