PowerShell Remove-Variable cmdlet: do I need to call it at the end of each function/scriptblock? - powershell

This is a generic question, no code.
Not sure if I need to remove local variables, as I thought this should be done by the PowerShell engine.
I had a script that gathered info from WMI and used a lot of local variables. The output was messed up when running it multiple times, but it got fixed after I cleaned up all the local variables at the end of the function/scriptblock.
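For reference, the functions are roughly along these lines (an illustrative sketch only, not the real code):
function Get-OsInfo {
    # lots of local variables collected from WMI
    $os = Get-CimInstance -ClassName Win32_OperatingSystem
    $report = [pscustomobject]@{ Caption = $os.Caption; Version = $os.Version }
    $report

    # cleanup at the end of the function/scriptblock: is this needed,
    # or do the locals go out of scope on their own?
    Remove-Variable -Name os, report -ErrorAction SilentlyContinue
}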
Any thoughts/idea would be appreciated.

The trouble does not come from the fact that you do not remove your vars, but from at least two beginner errors (also made by lazy developers like me, supposing I am a developer):
we forget to initialize our vars before using them;
we do not test every return value of our function or cmdlet calls.
Once these two things are done (code at least doubled in size), you can rerun your script without cleaning anything except the processed data.
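For example, a minimal PowerShell sketch of those two habits (the function name and WMI class are just for illustration):
function Get-DiskReport {
    param([string]$ComputerName = '.')

    # 1) Initialize the locals explicitly instead of relying on leftover values
    $results = @()

    # 2) Test the return of the call instead of assuming it succeeded
    $disks = Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName -ErrorAction SilentlyContinue
    if (-not $disks) {
        Write-Warning "No disk information returned for $ComputerName"
        return $results
    }

    foreach ($disk in $disks) {
        $results += [pscustomobject]@{
            Drive  = $disk.DeviceID
            SizeGB = [math]::Round($disk.Size / 1GB, 1)
        }
    }
    return $results
}
Because everything is initialized at the top and returned explicitly, running the function several times in the same session gives the same output.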
For me, scripting is most of the time done on the corner of a table and not even pushed to a source repository.
So start scripting and ask yourself fewer questions.

Related

EES to MATLAB, process only completes when closing EES manually

I'm currently doing some combustion engine analysis, which has led me to try and pass some specific heats from EES to MATLAB by using EES macros (.emf files) to generate the properties. This works great for simple tasks where the properties are just assigned to variables in the macros, which are then exported and read by MATLAB.
Now I'm interested in getting the properties of products in chemical equilibrium calculations, so I need to solve coupled equations in EES. This poses a problem, since you can't have unassigned quantities on the right-hand side in EES macros.
The above problem was quickly solved simply by solving the equations for the equilibrium composition in a regular .ees file and then exporting the results. But this has led to another problem:
Once I call my MATLAB script, the procedure starts hanging just before the specific heats are returned. I've found that the script completes once you manually close the now-opened EES window, but this is not viable since I need to make several hundred imports.
The problem doesn't occur when using EES macros instead of files, since in these you can simply use the Quit statement at the end, but as mentioned, macros are not an option for this. Does anyone know of an equivalent statement that you can use in an EES file? I've also tried to shut down EES with a system command in my script: system('taskkill /F /IM EES.EXE');. But this doesn't seem to be able to find the EES task, although it appears in the task manager and in the taskbar (the statement is tested; it works if you open EES manually).
Any help is very much appreciated, thanks in advance!
Regards
You can use a macro file to solve the EES file and then quit the program.
Example.emf contains:
Open C:\Example.ees
Solve
Quit
And then the MATLAB system call
system('$EESPath\ees.exe C:\Example.emf');
will do the job.
You will need to leverage the $Export directive to place the results into an external file that MATLAB can then import.

In PowerShell, is it good practice to write multiple functions in a single script file?

I was told to write multiple actions using a PowerShell script: actions such as app pool creation, SQL updates, file editing, etc.
This is the first time I am going to write such a bulk of things in a script.
So I would like to know the best practice before writing them.
Is it good practice to write all the functions in a single file?
I am thinking I may need to write at least 10 functions, assuming each function has about 10 lines of code.
Consider modules: the simplest format is a manifest (.psd1) and a single script file (.psm1) containing all the functions, aliases, etc. that the module exports (plus any internal helpers).
In this case you are clearly putting multiple connected functions in one file. Even if much of the code is only dot-sourced into the script module, the functions are still logically in one entity.
On the other hand, using scripts on your path that execute without having to be loaded beforehand would tend (as per Adriano's comment on the question) to favour one function per file, written at script scope rather than as a function statement.
Therefore: there is no single "good practice"; it all depends on the details of the circumstance.
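As a minimal sketch of the script-module layout mentioned above (the module and function names are hypothetical, loosely matching the app-pool/SQL tasks from the question):
# MyDeployTools\MyDeployTools.psm1 - all related functions live in this one file
function New-AppPool {
    param([string]$Name)
    Write-Verbose "Creating app pool $Name"
    # ... implementation ...
}

function Update-Database {
    param([string]$ConnectionString)
    # ... implementation ...
}

# Export only the public surface; any helper functions stay private to the module
Export-ModuleMember -Function New-AppPool, Update-Database
The matching manifest can then be generated once with:
New-ModuleManifest -Path .\MyDeployTools\MyDeployTools.psd1 -RootModule 'MyDeployTools.psm1'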
Be pragmatic; truth comes from action, not from words ;O)
So begin at the beginning:
1) Does the thing you want to do already exist somewhere on the internet, e.g. PoshCode? (If so, you can adapt it.)
2) Think about your functions (not too much); the objective is to reuse code (write your algorithm in pseudo code).
3) Use the internet to look for functions that already exist.
4) Write all the functions in the same file as the main code to test them. During this phase you'll discover new functions and parameters to add to, or remove from, existing ones.
5) Once you have tested your code, put the reusable functions (and the ones they depend on) into one or more modules.
My solution would be to create a custom module to which you can add functions later.
You can save your single file with all the functions as mymodule.psm1 in a mymodule folder under one of the paths in $env:PSModulePath.
Then Import-Module mymodule (or better, call it in your $profile so it is ready when the console is up).
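A rough sketch of those steps, assuming the per-user module path of Windows PowerShell:
# The folder name must match the .psm1 base name
New-Item -ItemType Directory -Path "$HOME\Documents\WindowsPowerShell\Modules\mymodule" -Force
Copy-Item .\mymodule.psm1 "$HOME\Documents\WindowsPowerShell\Modules\mymodule\"

# Load it on demand, or put the Import-Module line in $PROFILE
Import-Module mymodule
Get-Command -Module mymodule   # list the functions the module exposes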

How to split long Perl code into several files without too much manual editing?

How do I split a long Perl script into two or more different files that can all access the same variables - without having to rename all shared variables from e.g. $count to $::count (or $main::count which is the same)?
In other words, what's the best and simplest way to split the Perl script into several files without having to import a lot of variables/functions and/or do a lot of manual editing.
I assume it has something to do with making the code part of the same package/scope/namespace, but my experiments so far have failed.
I am not sure it makes a difference, but the script is used for web/CGI purposes and will be running under mod_perl.
EDIT - Background:
I kind of knew I would get that response. The reason I want to split up the file is the following:
Currently I have a single very old and very long Perl file. I know it is not following Perl best practices but it works.
The problem is, I need to distribute the data files it uses between different web servers, first of all for performance reasons. There will be one "master" server and one or several "slaves".
About 20% of the mentioned Perl file contains shared functions, 40% has the code needed to run on the master server, and 40% the code for the slave servers. Therefore, I would like to split the code into three files: 1. shared, 2. master-only, 3. slave-only. On the master server, 1 and 2 will be loaded; on the slaves, 1 and 3 will be loaded.
I assume this approach would use less process RAM and, more importantly, I would minimize the risk of not splitting the code correctly (e.g. a slave process calling a master data file). I don't see a great need for modularization, as the system works and the code does not need a lot of changes or exchanges with other projects.
EDIT 2 - Solution:
Found the solution I was looking for here:
http://www.perlmonks.org/?node_id=95813
In cases where the main package is in ownership of the variable, the actual word 'main' can be omitted to yield something like: $::var
It is possible to get around having to fully qualify variable names when strict is in use. Applying a simple use vars to your script, with the variable names as its arguments, will get around explicit package names.
Actually, I ended up repeating the our ($count, etc...) statement for the needed variables instead of use vars ();
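For example, a minimal sketch of that layout (file, variable, and function names are made up):
# shared.pl - loaded on every server; everything stays in package main
our ($count, $data_dir);      # package variables, visible to every file
$count    = 0;
$data_dir = '/var/data';
1;

# master.pl - loaded only on the master
our ($count, $data_dir);      # repeat the declaration so strict is satisfied
sub master_task {
    $count++;
    print "master run $count, data in $data_dir\n";
}
1;

# main script
do './shared.pl' or die "shared.pl failed: ", $@ || $!;
do './master.pl' or die "master.pl failed: ", $@ || $!;
master_task();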
Do let me know if I am missing something vital - apart from not going with modules! :)
@Axeman, thanks, I will accept your answer, both for your effort and for sending me in the right direction.
Unless you put different package statements in their files, they will all be treated as if they had package main; at the top. So assuming that the scripts use package variables, you shouldn't have to do anything. If you have declared them with my (that is, if they are lexically scoped variables) then you would have to make sure that all references to the variables are in the same file.
But splitting scripts up for length is a rotten substitute for modularization. Yes, splitting helps keep file length down, but modularization is the proper way to keep code length down: for all the reasons that you would want to keep code length down, modularization does it best.
If chopping the files by length could really work for you, then you could create a script like this:
do '/path/to/bin/part1.pl';
do '/path/to/bin/part2.pl';
do '/path/to/bin/part3.pl';
...
But I kind of suspect that if the organization of this code is as bad as you're (sort of) indicating, it might suffer from some of the same reinventing of the wheel that I've seen in Perl-ignorant scripts. Just offhand (I might be wrong), I'm thinking you would be surprised how much could be chopped from the length simply by substituting better-tested Perl library idioms for the hand-rolled for and while loops.

How can I control an interactive Unix application programmatically through Perl?

I have inherited a 20-year-old interactive command-line unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record has a very predictable pattern of keying in commands, reading responses, keying in further commands, etc. However, some record creation operations will result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT set to named pipes, then sends the target application the sequence of commands to create a record with the required parameters, and then instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Application. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find, though, are expect, which does not seem to be maintained, and chat, which I recall from ages ago and which seems to do more or less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is: what approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and then having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read it seems like there are a lot of potential problems if I go down this path.
Thanks in advance.
I'd definitely stick to Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named-pipe approach.
Note also that with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the discussion in perlipc in the section "Bidirectional Communication with Another Process" for more details.
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option, wanting to have a full and familiar programming language for managing the process and having confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, either with the Tcl or Perl implementations, would be my first attempt. If you are seeing odd sequences in the output because it's doing odd terminal things, just filter those from the output before you do your matching.
With named pipes, you're going to end up reinventing Expect anyway.
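A minimal Expect.pm sketch of the kind of dialogue described in the question (the application name, prompts, and commands are hypothetical):
use strict;
use warnings;
use Expect;   # CPAN module; drives the program through a pseudo-tty, so output is unbuffered

# Spawn the legacy application (made-up name and arguments)
my $exp = Expect->spawn('legacy_app', '-interactive')
    or die "Cannot spawn legacy_app: $!";

# Wait up to 10 seconds for the main prompt, then try to create a record
$exp->expect(10, 'Command>') or die "Never saw the prompt";
$exp->send("create record ID123\n");

# Branch on success or on the 'already exists' error condition
$exp->expect(10,
    [ qr/already exists/ => sub { shift->send("cancel\n") } ],
    [ qr/Record created/ => sub { } ],
);

$exp->send("quit\n");
$exp->soft_close();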

How can I force unload a Perl module?

Hi, I am using a Perl script written by another person who is no longer with the company.
If I run the script standalone, the output is as expected. But when I call the script from another script repeatedly, the output is wrong except for the first time.
I suspect some variables are not initialised properly. When it is called standalone, it exits each time and all the variable values are initialised to defaults. But when called from another Perl script, the modules and the variable values are probably carried over to the next call of the script.
Is there any way to flush the called script out of memory before I call it the next time?
I tried enabling warnings and it threw up thousands of lines of warnings...!
EDIT: How I am calling the other script:
The code looks like this:
do "processing.pl";
...
...
...
process(params); #A function in processing.pl
...
...
...
If you want to force the module to be reloaded, delete its entry from %INC and then reload it.
For example:
sub reload_module {
    # Forget that the module was ever loaded...
    delete $INC{'Your/Silly/Module.pm'};
    # ...then compile it again and re-import its symbols
    require Your::Silly::Module;
    Your::Silly::Module->import;
}
Note that if this module relies on globals in other modules being set, those may need to be reloaded as well. There's no easy way to know without taking a peek at the code.
Hi, I am using a Perl script written by another person who is no longer with the company.
I tried enabling warnings and it threw up thousands of lines of warnings...!
There's your problem right there. The script was not written properly and should be rewritten.
Ask yourself this question: if it has 1000s of warnings when you enable strict checking, how can you be sure that it is doing the right thing? How can you be sure that it is not clobbering files, trashing data sets, making a mess of your filesystem? Chances are it is doing all of these things, either deliberately or accidentally.
I wouldn't trust running an error-filled script written by someone no longer with the company. I'd rewrite it and be sure that it was doing what I needed it to do.
Unloading a module is a more difficult task than simply removing the %INC entry of the module. Take a look at Class::Unload from CPAN.
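A short sketch of the Class::Unload route (keeping the hypothetical module name from the earlier answer):
use Class::Unload;

# Wipe the class's symbol table and its %INC entry so a later
# require/use genuinely reloads it from disk
Class::Unload->unload('Your::Silly::Module');
require Your::Silly::Module;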
If you don't want to rewrite/fix the script, I suggest calling it via exec() or one of its varieties. While it is not very elegant, it will definitely fix your problem.
Are you sure that you need to reload the module? By using do, you are reading the source every time and executing it. What happens if you change that to require, which will only read and evaluate the source once?
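For example (a sketch, reusing the file name from the question):
# 'do' re-reads and re-executes processing.pl on every call,
# re-running its top-level code each time:
do 'processing.pl' or warn "processing.pl: ", $@ || $!;

# 'require' compiles and runs it only once per process; later calls
# are no-ops because the path is recorded in %INC:
require './processing.pl';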
Another possibility (just thinking aloud here) could be to do with the local directory: are they running from the same place? Probably wouldn't explain it working the first time, though.
Another option is to use system('doprocessing.pl');. Lazily, we do this with a few scripts to force re-initialisation of a number of classes/variables etc., and to force the log files to rotate properly.
edit: I have just re-read your question, and it would appear that you are not calling it like this.