I have found all sorts of references to using PowerShell to change the active power plan, and I have found instructions for manually creating a new power plan, but I can't seem to find anything about using PowerShell to automate the creation of a new plan. Is this something that can be done and I just need to keep looking? Or am I not finding it because it can't be done?
And, a little context: I am automating the setup of lab machines for a three-day conference. The machines come from various vendors, and I have neither knowledge of nor control over the settings their Windows image provides. Laptops are usually set to power down the screen after 10-15 minutes, which is crazy for a lab, where you will often spend longer than that listening to instruction, and then need a password the moment you try something. My goal is to have one script create a new power plan with the settings I want, and a second script that makes that plan current for the user. I also need this to work in PSv2, as 99% of the time we get Windows 7 and I am not in a position to demand a PowerShell update. Eventually we will automate the OS install too and eliminate a few more variables, but for now we work with the OS image we get.
Apparently you need to wrap powercfg calls in your script to modify power plans. The good news is that you can call powercfg -import <file> <GUID>, and you can prepare the file by setting the desired parameters on a test PC and calling powercfg -export for that plan. So you create a .bat file around the exported plan and call it at startup to set the power plan. You can also modify the current power plan by calling powercfg -x. See powercfg -? for details.
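As a minimal sketch of that round trip (the .pow file name and the target GUID below are made-up placeholders, not values taken from any real machine):
REM On a reference PC, tune a plan manually, then export it by its GUID:
REM     powercfg -export lab-plan.pow <GUID-of-tuned-plan>
REM On each lab machine, import it under a known GUID and make it active:
powercfg -import lab-plan.pow 11111111-2222-3333-4444-555555555555
powercfg -setactive 11111111-2222-3333-4444-555555555555
Since powercfg does all the real work, this approach is fine from a PSv2-era Windows 7 machine.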
Well, I had to look in a few different places to solve exactly this problem, and I came up with the following little script that does exactly that, using a batch file instead:
@ECHO OFF
SET "src_profile=High performance"
SET "new_profile=DAW Optimised"
:CREATE_PLAN
REM Create new power plan based on the existing one specified, rename it and specify settings
echo Setting up new power plan
for /f "tokens=4" %%f in ('powercfg /list ^| findstr /C:"%src_profile%"') do set GUID=%%f
for /f "tokens=4" %%I in ('powercfg -duplicatescheme %GUID%') do set dest_GUID=%%I
powercfg /changename %dest_GUID% "%new_profile%"
powercfg /setactive %dest_GUID%
I'm sure that conference is long over, but hopefully this helps someone in a similar situation.
I'm trying to write a PowerShell script to automate some scanning activities using Windows Defender. I've noticed a limitation with the published code, and I'm interested to know whether or not there is a workaround.
Is there any reason why when you run this:
Start-MpScan -ScanType CustomScan -ScanPath "C:\Files"
the scan does not get added to the Event Viewer?
I need this because I need a way to keep a log of which files were scanned.
If I could output the results of the scan directly from PowerShell, that would be even better, but I don't believe this function returns anything.
Any pointers appreciated.
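For reference, one possible avenue (a hedged sketch, not a confirmed workaround): Defender logs scan activity to its own operational event log, which can be queried after the scan. The event ID used here (1001, "scan finished") is an assumption worth verifying on your build.
Start-MpScan -ScanType CustomScan -ScanPath "C:\Files"
# Read the most recent scan-finished events from Defender's own log.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Windows Defender/Operational'
    Id      = 1001   # assumed 'scan finished' event ID; verify locally
} -MaxEvents 5 | Format-List TimeCreated, Message
Note that this logs scans, not the individual files scanned; I don't know of a supported way to get a per-file list out of Start-MpScan.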
For normal use, I want my script to output a lot of Write-Host calls to highlight information with colour (I am aware of the confusion -Host commands can cause alongside pipeline commands, since they write straight to the console, but I am OK with that). My script has a few hundred Write-Host statements, and I have simple logic to run it attended or unattended via some choice prompts along the way, but I realised that at a certain point I might want to make it completely silent, ideally without having to delete all of the Write-Host statements.
So, might there be a way to put some kind of directive in a script to make it globally ignore all Write-Host commands (or do so for selected regions), so that my script can run completely silently when I want it to?
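One well-known trick that can do this: PowerShell resolves functions before cmdlets, so defining a function named Write-Host shadows the real cmdlet for the rest of the scope. A minimal sketch, assuming a $Silent switch of your own devising:
if ($Silent) {
    # This no-op shadows the Write-Host cmdlet; all arguments (including
    # -ForegroundColor and friends) land in $args and are discarded.
    function Write-Host { }
}
Calling Remove-Item Function:\Write-Host afterwards restores the real cmdlet, which lets you silence only selected regions.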
I am working on a project with around 40 script files, and I am going to package the scripts to distribute them to my clients (like a version release). I don't want my clients to change my scripts (or at least I want to make it hard for them to do so).
I have made certain files read-only and set the execution policy, but the clients can simply set them back to writable, so I want to add a few lines of code (preferably fewer than five) to check that the scripts have not been modified.
I am aware that the LastWriteTime property would do it, but I would need to track it for each script (a hash table keeping the LastWriteTime of every file would be too long and not clean enough), which is not ideal.
I have also considered Get-FileHash, but I am concerned that the hash will change each time I run it.
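(For what it's worth, Get-FileHash is deterministic: the hash changes only if the file content changes. A minimal baseline check, assuming PowerShell 4+ and a hypothetical .\scripts layout, might look like this:)
# At packaging time, record a baseline of content hashes:
Get-ChildItem .\scripts\*.ps1 | Get-FileHash -Algorithm SHA256 |
    Select-Object Path, Hash | Export-Csv .\baseline.csv -NoTypeInformation
# At run time, warn about any script whose hash no longer matches:
Import-Csv .\baseline.csv | Where-Object {
    (Get-FileHash $_.Path -Algorithm SHA256).Hash -ne $_.Hash
} | ForEach-Object { Write-Warning "Modified: $($_.Path)" }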
As you have already realized, it is impossible to prevent clients from modifying scripts in a water-tight way. Bruce Schneier sums it up nicely: "Trying to make bits uncopyable is like trying to make water not wet."
To run a script, it has to be copied at least into the system's memory, and at that point you have lost control. What's to prevent copying the script to an alternate location and editing it before running? Nothing, unless you have tight control over the client machine. If you do have that control, setting the execution policy to AllSigned prevents running unsigned scripts - until the client starts PowerShell from the command line with the -ExecutionPolicy Bypass switch. The execution policy isn't a security system that restricts user actions.
There are a few approaches that can hinder editing, but a determined hacker can overcome them. So the root question is: why? Why shouldn't the clients modify the scripts? Is it to protect some IP? Are they trying to achieve something the scripts are not designed for? Something else?
A simple solution is to use a tool like PS2EXE, which wraps a PowerShell script in an executable. The contents can still be extracted and modified, but that requires at least a bit more effort than opening Notepad.
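A minimal sketch of that route, using the ps2exe module from the PowerShell Gallery (the script name is hypothetical, and the cmdlet and parameter names are per the module's documentation, so verify them against your installed version):
Install-Module ps2exe -Scope CurrentUser
Invoke-PS2EXE -InputFile .\Deploy.ps1 -OutputFile .\Deploy.exe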
Another approach would be modules. Distribute the scripts as a PowerShell module that the clients import. Editing a module requires a bit more effort than editing a simple script file, but it is quite possible too.
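A rough sketch of that packaging step (the module and file names are made up for illustration):
# Gather the scripts as functions inside ClientTools.psm1, then generate
# a manifest that describes the module:
New-ModuleManifest -Path .\ClientTools\ClientTools.psd1 `
    -RootModule 'ClientTools.psm1' -ModuleVersion '1.0.0'
# Clients then import the module instead of running loose .ps1 files:
Import-Module .\ClientTools\ClientTools.psd1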
I have an InvokeProcess activity whose output I am trying to grab, for example:
sc.exe query w3svc
which queries whether the IIS service is installed or not.
I am using an Assign activity to try to capture the stdOutput into a variable so I can use it in the next step. The problem is that it only captures the first line. The output from this command contains CRLFs, which I think is the problem. I have tried various ways to remove them, but to no avail.
Any ideas on this one?
What I ended up doing was:
1) Moved my command into a batch file and tuned it to return a more specific result (see the sketch after this list), for example:
sc.exe query w3svc | find /c /i "w3svc"
which returns a 0 or a 1
2) Moved the batch file into source control.
3) Added a DownloadFiles activity to my template and I download the batch right before I need to use it.
4) In the subsequent InvokeProcess, I add an Assign to capture the stdOutput, then check it in an If activity afterwards.
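For completeness, the batch file from step 1 amounts to something like this (the file name is made up):
@echo off
REM check_w3svc.bat - prints the count of lines mentioning w3svc (0 or 1)
sc.exe query w3svc | find /c /i "w3svc"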
Hope this helps someone, as I know there isn't a whole lot out there on this stuff.
I have both Sybase and Microsoft SQL Server installed. At times Sybase interferes with MS SQL because they have some overlapping commands.
So, I need two scripts:
A) When run, script A backs up the current path, grabs all paths that contain sybase, SYBASE, or SyBASE (you get the point), and moves them all to the very end of the path, while preserving their order.
B) When run, script B restores the path from the backup.
Both scripts should affect the path immediately. So a.bat, which calls patha.ps1 and then pathb.ps1, would look like this:
REM Old path here
call patha.ps1
REM At this point the effective path should be different.
call pathb.ps1
REM Effective old path again
Please let me know if this does not make sense. I am not sure if the call command is the best one to use.
I have never used PowerShell before. I could formulate the same thing in Python (I know SO users tend to ask "what have you tried so far"), but at this point I am very slow at writing anything in PowerShell.
Please help.
First of all: call will be of no use here, as you are apparently writing a batch file and PowerShell scripts have no file association that runs them by default; call is for batch files or subroutines. From a batch file you would have to invoke them via powershell.exe -File patha.ps1.
Secondly, any PowerShell script you call from a batch file cannot change the environment variables of the caller's environment. That's a fundamental property of how processes behave, and since you are calling another process, this is never going to work.
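A quick way to see this from a batch file (the path value is arbitrary):
REM The child PowerShell process changes only its own copy of PATH:
powershell -NoProfile -Command "$env:PATH = 'C:\changed'; $env:PATH"
REM Back in the calling cmd.exe, PATH is unchanged:
echo %PATH%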
I'm not so sure why you are even using a batch file here in the first place if you have PowerShell. You might just as well solve this in PowerShell completely.
However, what I take from your problem is that the best way to resolve it is probably the following: create two batch files that each set PATH appropriately. You can probably leave both the MSSQL and Sybase paths out of your usual PATH and add them solely in the batch files. Then create shortcuts to
cmd /k set_mssql_path.cmd
and
cmd /k set_sybase_path.cmd
each of which is now a shortcut to a shell set up for the respective database's tools. This is how the Visual Studio Command Prompt works, and it's probably the cleanest solution you have. You can use the color and prompt commands in those batch files to make the two shells visually distinct, so you always know which environment you are in. For example, the following two lines color the console white on blue and set a prompt indicating MSSQL:
color 1f
prompt MSSQL$S$P$G
This can be quite handy, actually.
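Putting the pieces together, set_mssql_path.cmd might look something like this (the tools directory below is a made-up placeholder; substitute your actual install path):
@echo off
REM Prepend the MSSQL tools to PATH for this shell only (placeholder path)
set "PATH=C:\Program Files\Microsoft SQL Server\Tools\Binn;%PATH%"
color 1f
prompt MSSQL$S$P$G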
Generally, trying to rearrange the PATH environment variable isn't exactly easy. While you could trivially split at each ;, that fails for paths that themselves contain a semicolon (and which then need to be quoted). Even in PowerShell this takes a while to get right, so I think creating tool-specific shortcuts is probably the nicest way to deal with this.
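That said, if none of your PATH entries contain quoted semicolons, a rough PowerShell sketch of the reordering the question asks for (run it in the same session whose PATH you want to change) could be:
# Split PATH into entries; -match is case-insensitive by default, so it
# catches sybase, SYBASE, SyBASE, and so on.
$parts  = $env:PATH -split ';' | Where-Object { $_ }
$sybase = $parts | Where-Object { $_ -match 'sybase' }
$others = $parts | Where-Object { $_ -notmatch 'sybase' }
# Keep the original relative order within each group, Sybase entries last.
$env:PATH = ($others + $sybase) -join ';'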