Need PowerShell help! Very strange things are happening

So I'm using PowerShell to manipulate a SharePoint 2010 library. I am uploading, downloading, and deleting files in a script using a custom module I made, and the errors I'm getting are so odd I can't make sense of them. I am using PowerGUI, Windows PowerShell ISE, and the PowerShell Management Shell, all in admin mode.
PowerGUI:
Sometimes I can get an SPWeb object and sometimes I can't. The URL string is pulled from a CSV file, so it never changes, and neither does the code leading up to the call Get-SPWeb -Identity $correctURL
Sometimes when I read a list's RootFolder, its Exists property returns $false; in the Management Shell I can get past this. Otherwise I can "touch" it by calling $ListName.RootFolder.Files, and Exists magically comes back $true on later executions of my script.
Then there is an XML file full of file properties (for uploaded files). $fileFieldsXML.row.Attributes | foreach {$_} returns the property names and $fileFieldsXML.row.Attributes | foreach {$_.ToString()} returns the values. That is, unless I assign them to variables: when two distinct variables are set to these two slightly different calls, both end up holding the array of property names! Why??
Windows PowerShell ISE and PowerShell Management Shell
I think these are just outdated somehow. I can call Get-SPWeb in the Management Shell but not in the ISE, which I assume is a version problem. Lately the Management Shell acts as if I haven't been doing anything to the files unless I close it and reopen it. Does the Management Shell hold a copy of all the files when it starts, or something? Can I make it refresh them?
Can anyone suggest a better way to debug? Also, why does moving code into a module seem to severely increase runtime? When everything was in the same script it was quick, but my long functions now take several times longer to execute.
I have also only been using PowerShell and SharePoint for almost two months, so I am a beginner and an intern. Perhaps that is really the cause of my problems :)

Related

Powershell: Detecting that a specifically opened program is running (and closing it)

I'm trying to automate a workflow. The automation script is mainly written in PowerShell. It consists of these steps: 1) opening a program, 2) communicating with its API, reading values, etc., 3) closing the program. The script will be run many times a day, so it would be enough not to close the program every time the script finishes, but rather to check at the beginning of the script whether the program is already open and, if not, open it. I'd like to implement both, then decide which solution to use later on.
The code for opening the program is done, but it's not enough to just run the .exe: to load the correct settings and GUI I also have to pass the -s and -c switches on the command line. I wrapped all of this in runProgram.cmd, so in the PowerShell script I only run that file to open the program. However, I am unsure how to detect that the program is already open, and how to close it. I believe a solution might use processes, with the help of Get-Process, but I'm unsure of its capabilities and limitations (how do I check whether my program's process is among the running processes?), and whether there is a better way of dealing with this problem.
I have found the solution:
Open the program, open PowerShell, and type Get-Process (this lists all the currently running processes).
Find yours by name. If you don't know which process is the one you're looking for, close your program, type Get-Process again, and look for the process that disappeared from the list since you closed it. Let's assume its name is "yourprocess".
In the code, use $val = Get-Process -Name yourprocess -ErrorAction SilentlyContinue. If the process is running, $val holds its process object; if it is not running, nothing is found and $val is $null (without -ErrorAction SilentlyContinue, Get-Process also writes an error). Therefore, to check whether the program is open, use:
if($null -ne $val){...}
Finally, stop the process with Stop-Process -Name yourprocess.
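Putting those pieces together, a minimal sketch of the check-then-open / close-at-end pattern might look like the following, where yourprocess and runProgram.cmd stand in for the real process name and launcher:
# check whether the program is already running (no error if it isn't)
$proc = Get-Process -Name yourprocess -ErrorAction SilentlyContinue
if ($null -eq $proc) {
    # not running yet: launch it through the wrapper that passes -s and -c
    Start-Process -FilePath .\runProgram.cmd
}
# ... communicate with the program's API here ...
# optionally close it again when the script is done
Stop-Process -Name yourprocess -ErrorAction SilentlyContinue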

Powershell & Windows Defender limitations

I'm trying to write a PowerShell script to automate some scanning activities using Windows Defender. I've noticed a limitation and I'm interested to know whether there is a workaround.
Is there any reason why, when you run this:
Start-MpScan -ScanType CustomScan -ScanPath "C:\Files"
the scan does not get added to Event Viewer?
I need this because I need a way to keep a log of what files were scanned.
If I could output the results of the scan directly from PowerShell that would be even better, but I don't believe this cmdlet returns anything.
Any pointers appreciated.
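For what it's worth, one way to get at scan activity after the fact is to query the Defender operational event log and the Defender cmdlets once the scan has run. A rough sketch, assuming the standard Microsoft-Windows-Windows Defender/Operational log and the built-in Defender module are available (note this records scan start/finish events and detections, not a per-file list):
# kick off the custom scan
Start-MpScan -ScanType CustomScan -ScanPath "C:\Files"

# pull recent scan events from the Defender operational log
# (IDs 1000/1001 are commonly scan started/finished)
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Windows Defender/Operational'
    Id      = 1000, 1001
} -MaxEvents 10 | Select-Object TimeCreated, Id, Message

# list any threats Defender has recorded
Get-MpThreatDetection | Select-Object InitialDetectionTime, Resources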

What happens when powershell script encounters EOF while a quote is open?

Unicorn.py generates a string that looks like
powershell -flag1 -flag2 "something " obfuscation; powershell "more gibbrish
Interestingly, if this command is saved in a file filename.txt, Windows executes it before opening the file in Notepad (by which time the file is empty).
Why is the file executed despite the extension?
What does the script do when it encounters EOF after odd number of quotation marks?
Edited:
Unicorn (https://github.com/trustedsec/unicorn) is a script that "enables privilege elevation and arbitrary code execution", if you know what that means. Of course I did NOT post the actual string, just its key features.
Purely out of IT security interest.
I think that if you read the manual for unicorn.py, at no point does it say the script should be left in the txt file.
The PowerShell script is written into the txt file and called the "payload" (very hacker-like). What is left up to you is how to execute this code on the victim's computer.
The manual proposes Word code injection, simply executing the PowerShell from cmd (I quote: "Next simply copy the powershell command to something you have the ability for remote command execution."), an Excel Auto_Open attack, and so on.
If reading the manual is too much, there is always a video. The only time the "hacker" uses anything Notepad-like is on his Linux system (how ironic)… I watched it because I love the Papa Roach song Last Resort...
For those who are concerned about IT security, I recommend the article on dosfuscation. It is really instructive about how careful you have to be when receiving mail, outside documents, ... and how much time humanity can waste spying, deceiving, and inventing new twisted strategies... Aren't we great!
Windows, like any other system, has many flaws, but opening Notepad is not one of them. Unless your Notepad has been replaced by a hacker using unicorn…
There is an even number of brackets in the obfuscated script. Did you mix up '' with "?
An empty txt file means that you've sent attack.txt over the network to a drive accessible to an up-to-date antivirus, and the antivirus quarantined/deleted the file contents. Since you didn't know about this interaction with the antivirus, your environment is NOT secure, which means you might have other malware from previous tests lurking on your "clean" network.
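As a side note on the quoting question itself: you can see what the parser does with an unterminated string by feeding PowerShell one on purpose. A harmless sketch (the exact error text may vary between versions; interactively the console would instead keep prompting for more input with >>):
# contents of test.ps1 (note the missing closing quote)
Write-Output "this string never ends

# running .\test.ps1 fails with a parse error along the lines of:
#   The string is missing the terminator: ".
#   FullyQualifiedErrorId : TerminatorExpectedAtEndOfString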

Run a PowerShell script from another one

What is the best and correct way to run a PowerShell script from another one?
I have a script a.ps1 from which I want to call b.ps1, which does a different task.
Let me know your suggestions. Is dot sourcing the best option here?
Dot sourcing will run the second script as if it were part of the caller: all script-scope changes will affect the caller. If this is what you want, then dot-source it.
However, it is more usual to call the other script as if it were a function (a script can use param and function-level attributes just like a function). In many ways a script is a PowerShell function, with the name of the file replacing the name of the function.
Dot sourcing also makes it easier to convert your script(s) into a module at a later stage; you won't have to change the script(s) into functions.
Another advantage of dot sourcing is that you can add the functions to your shell by dot-sourcing the file that holds them from Microsoft.PowerShell_profile.ps1, meaning you have them available at all times (eliminating the need to worry about paths, etc.).
I have a short Write-Host at the top of my dot-sourced files with the name of the function and its common parameters, and I dot-source those files in my profile. Each time I open PowerShell, the list of functions in my profile scrolls by (if, like me, you frequently forget the exact names of your functions/files, you'll appreciate this as the number of functions piles up over time).
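To make the difference concrete, here is a small sketch, assuming a.ps1 and b.ps1 live in the same folder:
# inside a.ps1

# call b.ps1 as a child script: its variables and functions stay in its own scope
& "$PSScriptRoot\b.ps1"

# dot-source b.ps1: its functions and script-scope variables land in a.ps1's scope
. "$PSScriptRoot\b.ps1"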
Old but still relevant.
I work with modules via Import-Module, which imports the module into the current PowerShell session.
To avoid stale cached copies and to always pick up the latest changes to the module, I first run Get-Module | Remove-Module, which unloads all the modules loaded in the current session:
# unload everything currently loaded, then re-import the module fresh
Get-Module | Remove-Module
Import-Module '.\IIS\Functions.psm1'

Please help me with a PowerShell script which rearranges paths

I have both Sybase and Microsoft SQL Server installed. There are times when Sybase interferes with MS SQL because they have some overlapping commands.
So, I need two scripts:
A) When run, script A backs up the current PATH, grabs all entries that contain sybase or SYBASE or SyBASE (you get the point) and moves them all to the very end of the PATH, while preserving their order.
B) When run, script B restores the PATH from the backup.
Both script A and script B should affect the PATH immediately. So a.bat, which calls patha.ps1 and pathb.ps1, would look like this:
REM Old path here
call patha.ps1
REM At this point the effective path should be different.
call pathb.ps1
REM Effective old path again
Please let me know if this does not make sense. I am not sure whether the call command is the best one to use.
I have never used PowerShell before. I can try to formulate the same thing in Python (I know Stack Overflow users tend to ask "What have you tried so far?"). Well, at this point I am VERY slow at writing anything in PowerShell.
Please help.
First of all: call will be of no use here, as you are apparently writing a batch file and PowerShell scripts have no file association that runs them by default; call is for batch files or subroutines.
Secondly, any PowerShell script you call from a batch file cannot change environment variables in the caller's environment. That's a fundamental property of how processes behave, and since you are calling another process, this is never going to work.
I'm not so sure why you are even using a batch file here in the first place if you have PowerShell. You might just as well solve this in PowerShell completely.
However, what I get from your problem is that the best way to resolve this is probably the following: Create two batch files that each set the PATH appropriately. You can probably leave out both the MSSQL and Sybase paths from your usual PATH and add them solely in the batch files. Then create shortcuts to
cmd /k set_mssql_path.cmd
and
cmd /k set_sybase_path.cmd
each of which is now a shortcut to a shell set up for the appropriate database's tools. This is how the Visual Studio Command Prompt works, and it's probably the cleanest solution you have. You can use the color and prompt commands in those batch files to make the two shells visually distinct so you always know which environment you are in. For example, the following two lines color the console white on blue and set a prompt indicating MSSQL ($S is a space, $P the current path, and $G the > character):
color 1f
prompt MSSQL$S$P$G
This can be quite handy, actually.
Generally, trying to rearrange the PATH environment variable isn't exactly easy. While you could trivially split it at every ;, this will fail for paths that themselves contain a semicolon (and which therefore need to be quoted). Even in PowerShell this will take a while to get right, so I think creating shortcuts specific to the tools is probably the nicest way to deal with this.
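That said, if a quick-and-dirty, session-only version is good enough, a sketch along these lines does the naive split-and-reorder inside the PowerShell session itself. It deliberately ignores the quoted-semicolon case mentioned above, and it only affects the current process, not the calling batch file:
# back up the current PATH so it can be restored later
$backup = $env:Path

# naive split on ';' (breaks on quoted entries that contain semicolons)
$entries = @($env:Path -split ';' | Where-Object { $_ })

# -match is case-insensitive by default, so this catches sybase/SYBASE/SyBASE
$sybase = @($entries | Where-Object { $_ -match 'sybase' })
$others = @($entries | Where-Object { $_ -notmatch 'sybase' })

# move the Sybase entries to the end, preserving order within each group
$env:Path = ($others + $sybase) -join ';'

# ... later, to restore:
$env:Path = $backup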