PowerShell: is there some hacky subroutine implementation?

I was wondering if there is some way to call (or load) a function located lower in the script execution path.
I wrote a script to run a deployment, and as one of the last steps the script parses web.config, making a ton of changes based on a configuration file. A feature request came in asking for a switch to generate the web.config without an actual deployment.
The only way I can think of doing it is to wrap all the parsing logic in a gigantic function and load it at the start of the script. However, that approach would make the script horribly ugly. Nor do I want to carve all the logic out into another script and dot-source it.
Any suggestions?
Thank you.

Make it two functions, one for deploy and one for web.config, and use a separate function to check for the switches and call those functions based on them.
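A minimal sketch of that layout (the function and parameter names here are illustrative, not taken from the original script):

# Illustrative sketch only: two worker functions plus a dispatcher that checks the switch.
param(
    [switch]$ConfigOnly    # generate web.config without deploying
)

function Update-WebConfig {
    # all the web.config parsing logic would live here
    Write-Host 'Generating web.config'
}

function Invoke-Deployment {
    # the actual deployment steps would live here
    Write-Host 'Deploying'
}

function Invoke-Main {
    # dispatcher: decide what to run based on the switch
    if ($ConfigOnly) { Update-WebConfig }
    else             { Invoke-Deployment; Update-WebConfig }
}

Invoke-Main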

I ended up reading another 30 or so articles and decided to simply wrap the functionality in a function, move it higher up in the script, and then dot source the function from inside the script.
Thanks.

Related

Is there a simpler way to check if multiple files have been modified?

I am working on a project with around 40 script files, and I am going to package the scripts to distribute them to my clients (kind of like a version release). I don't want my clients to change my scripts (or at least I want to make it hard for them to do so).
I have made certain files read-only by setting the execution policy, but the clients can simply set it back to writable, so I want to add a few lines of code (preferably fewer than 5) to check that the scripts have not been modified.
I am aware that using the LastWriteTime property would do it, but I would need to do this for each script (a hash table keeping track of the LastWriteTime for each file would be too long and not clean enough), which is not ideal.
I have also considered Get-FileHash, but I am concerned that the hash will change each time I run it.
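For what it's worth, the Get-FileHash result is derived purely from the file's contents, so for an unchanged file it comes out identical on every run. A self-check in fewer than five lines might look like this (the baseline value and file name below are placeholders):

# Compare the script's current hash against a known-good baseline (placeholder value).
$expected = 'E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855'
$actual = (Get-FileHash -Path "$PSScriptRoot\MyScript.ps1" -Algorithm SHA256).Hash
if ($actual -ne $expected) { throw 'Script has been modified.' }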
As you have already realized, it is impossible to prevent clients from modifying scripts in a water-tight way. Bruce Schneier sums it up nicely: "Trying to make bits uncopyable is like trying to make water not wet."
To run a script, one needs to copy it at least into the system's memory - and at that point you've lost control. What's to prevent copying the script to an alternate location and editing it before running? Nothing, unless you have tight control over the client machine. Should you have that control, setting the execution policy to AllSigned prevents running unsigned scripts - until the client starts PowerShell from the command line with the -ExecutionPolicy Bypass switch. The execution policy isn't a security system that restricts user actions.
There are a few approaches that can hinder editing, but a determined hacker can overcome them. So the root question is: why? Why shouldn't the clients modify the scripts? Is it to protect some IP? Are they trying to achieve something the scripts are not designed to do? Something else?
A simple solution is to use a tool like PS2EXE that converts a PowerShell script into an executable. The contents can be extracted and modified, but it requires at least a bit more effort than running Notepad.
Another approach would be modules. Distribute the scripts as a PowerShell module that the clients will import. Editing a module requires a bit more effort than editing a simple script file, but it is quite possible too.
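A rough sketch of what the module approach could look like; the module and function names are made up for illustration:

# MyTools.psm1 - collect the existing script logic into functions and export them
function Invoke-Deploy {
    # existing script logic would go here
}
Export-ModuleMember -Function Invoke-Deploy

# On the client side, import the module and call the exported function:
# Import-Module '.\MyTools\MyTools.psm1'
# Invoke-Deploy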

MATLAB doesn't find files I downloaded while the script is running

My problem is as described. My script downloads files through an external call to cmd (using the system function and then .NET to make keypresses). The issue is that when it tries to fopen these files I downloaded (filenames from a text file I write as I download), it doesn't find them, causing an error. When I run the script again after seeing it fail, it works but only up to the point where it's trying to download/call new files again, where it runs into the same problem.
Are new files downloaded while a script is running somehow not visible to the search path? Because the folder is most definitely in my search path (seeing as it works for files that were not downloaded during the current run). It's not that it isn't getting the files fast enough either, because they appear in my folder almost instantly, and I've tried adding a delay to give it time to recognize them, but that didn't work either.
I'm not sure if it's important to note that the script calls an external function which tries to read the files from the .txt list I create in the main script.
Any ideas?
The script to download the files looks like so:
NET.addAssembly('System.Windows.Forms');
sendkey = @(strkey) System.Windows.Forms.SendKeys.SendWait(strkey);
system('start cygwinbatch.bat')
pause(.1)
sendkey(callStr1)
sendkey('{ENTER}')
pause(.1)
sendkey(callStr2)
sendkey('{ENTER}')
pause(.1)
sendkey('exit')
pause(2)
sendkey('{ENTER}')
But that is not the main reason I am asking: I am confident that the downloads are occurring when the script calls for them, because I see the files appearing in my folder as they are requested. I am more confused as to why MATLAB doesn't seem to know they are there while the script is running, and I have to stop it and run it again for it to recognize the ones I've already downloaded.
Thank you,
Aaron
The answer here is probably to run the rehash function. MATLAB does not look for new files while executing an operation, and in some environments it misses new files even during interactive activity.
Running the rehash function forces MATLAB to search through its full path and determine whether there are any new files.
I've never tried to run rehash in the middle of an operation though. ...
My guess is that the MATLAB interpreter is trying to look ahead and is throwing errors based on a snapshot of what the filesystem looked like before the files were downloaded. Do you get different behavior if you run it one line at a time using F9? If that's the case then you may be able to prevent the interpreter from looking ahead by using eval().

Run a PowerShell script from another one

What is the best and correct way to run a PowerShell script from another one?
I have a script a.ps1 from which I want to call b.ps1 which does different task.
Let me know your suggestions. Is dot sourcing the best option here?
Dot sourcing will run the second script as if it were part of the caller - all script-scope changes will affect the caller. If this is what you want, then dot-source.
However, it is more usual to call the other script as if it were a function (a script can use param and function-level attributes just like a function). In many ways a script is a PowerShell function, with the name of the file replacing the name of the function.
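For example, a minimal sketch of the two styles from a.ps1 (it assumes b.ps1 declares a -Name parameter, which is purely illustrative):

# Option 1: call b.ps1 like a function - it runs in its own scope
& "$PSScriptRoot\b.ps1" -Name 'production'

# Option 2: dot source b.ps1 - it runs in the caller's scope,
# so any functions and variables it defines stay available afterwards
. "$PSScriptRoot\b.ps1"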
Dot sourcing makes it easier to convert your script(s) into a module at a later stage, since you won't have to change the script(s) into functions.
Another advantage of dot sourcing is that you can add the functions to your shell by adding the file that holds them to Microsoft.PowerShell_profile.ps1, meaning you have them available at all times (eliminating the need to worry about paths etc.).
I have a short Write-Host at the top of each dot-sourced file with the name of the function and its common parameters, and I dot source those files in my profile. Each time I open PowerShell, the list of functions in my profile scrolls by. (If, like me, you frequently forget the exact names of your functions/files, you'll appreciate this as the number of functions piles up over time.)
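A small sketch of that profile setup; the file name, path, and function below are made up for illustration:

# Get-Widget.ps1 - a dot-sourced file with a banner that scrolls by at startup
Write-Host 'Get-Widget -Name <string> [-Force]'
function Get-Widget { param([string]$Name, [switch]$Force) }

# Microsoft.PowerShell_profile.ps1 - load the function into every new session
. "$HOME\Documents\Scripts\Get-Widget.ps1"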
Old but still relevant.
I work with modules, using Import-Module, which imports the module into the current PowerShell session.
To avoid stale cached copies and to always pick up the latest changes to the module, I precede it with Get-Module | Remove-Module, which unloads all modules loaded in the current session:
Get-Module | Remove-Module
Import-Module '.\IIS\Functions.psm1'

Configuration Key Value Store

I'm in the planning stages of a script/app that I'm going to need to write soon. In short, I'm going to have a configuration file that stores multiple key value pairs for a system configuration. Various applications will talk to this file including python/shell/rc scripts.
One example case would be that when the system boots, it pulls the static IP to assign to itself from that file. This means it would be nice to quickly grab a key/value from this file in a shell/rc script (ifconfig `evalconffile main_interface` `evalconffile primary_ip` up), where evalconffile is the script that fetches the value when provided with a key.
I'm looking for suggestions on the best way to approach this. I've tossed around the idea of using a plain text file and Perl to retrieve the value. I've also tossed around the idea of using YAML for the configuration file, since there may end up being a use case where we need multiple values for a key, plus general expansion. I know YAML would make it accessible from Python and Perl, but I'm not sure what the best way to quickly access it from a shell/rc script would be.
Am I headed in the right direction?
One approach would be to simply use YAML as you wanted, and then when a shell/rc script needs a key/value pair, it would call a small Perl script (the evalconffile in your example) that parses the YAML on the shell script's behalf and prints out the value(s).
SQLite would give you the greatest flexibility, since you don't seem to know the scope of what will be stored in there. It appears there's support for it in all the scripting languages you mentioned.

Please help me with a PowerShell script which rearranges paths

I have both Sybase and MSFT SQL Servers installed. There is a time when Sybase interferes with MS SQL because they have some overlapping commands.
So, I need two scripts:
A) When run, script A backs up the current PATH, grabs all paths that contain sybase or SYBASE or SyBASE (you get the point) and moves them all to the very end of the PATH, while preserving their order.
B) When run, script B restores the PATH from the backup.
Both script A and script B should affect the path immediately. So an a.bat that calls patha.ps1 and pathb.ps1 would look like this:
REM Old path here
call patha.ps1
REM At this point the effective path should be different.
call pathb.ps1
REM Effective old path again
Please let me know if this does not make sense. I am not sure if the call command is the best one to use.
I have never used PowerShell before. I can try to formulate the same thing in Python (I know S.O. users tend to ask "What have you tried so far?"). Well, at this point I am VERY slow at writing anything in PowerShell.
Please help.
First of all: call will be of no use here as you are apparently writing a batch file and PowerShell scripts have no association to run them by default. call is for batch files or subroutines.
Secondly, any PowerShell script you call from a batch file cannot change the caller's environment variables. That's a fundamental property of how processes behave, and since you are calling another process, this is never going to work.
I'm not so sure why you are even using a batch file here in the first place if you have PowerShell. You might just as well solve this in PowerShell completely.
However, what I get from your problem is that the best way to resolve this is probably the following: create two batch files that each set the PATH appropriately. You can probably leave both the MSSQL and Sybase paths out of your usual PATH and add them solely in the batch files. Then create shortcuts to
cmd /k set_mssql_path.cmd
and
cmd /k set_sybase_path.cmd
each of which now is a shortcut to a shell to work with the appropriate database's tools. This is how the Visual Studio Command Prompt works and it's probably the cleanest solution you have. You can use the color and prompt commands in those batches to make the two different shells distinct so you always know what environment you have. For example the following two lines will color the console white on blue and set a prompt indicating MSSQL:
color 1f
prompt MSSQL$S$P$G
This can be quite handy, actually.
Generally, trying to rearrange the PATH environment variable isn't exactly easy. While you could trivially split it at every ;, this will fail for paths that themselves contain a semicolon (and which therefore need to be quoted). Even in PowerShell this will take a while to get right, so I think creating shortcuts specific to the tools is probably the nicest way to deal with this.
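That said, if you only need the change for the current session and can live with the caveat above, a naive PowerShell sketch of the reordering (it splits on ; and will break on quoted paths containing semicolons) might look like this:

# Naive sketch: move Sybase entries to the end of PATH for the current session only.
$backup = $env:Path                                          # keep a copy for restoring later
$parts = $env:Path -split ';' | Where-Object { $_ -ne '' }
$sybase = @($parts | Where-Object { $_ -match 'sybase' })    # -match is case-insensitive by default
$others = @($parts | Where-Object { $_ -notmatch 'sybase' })
$env:Path = ($others + $sybase) -join ';'

# To restore the original order:
# $env:Path = $backup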