We are building a bigger project with PowerShell (a collection of scripts/functions that help us set up some SharePoint environments/tenants).
A lot of the functions should reuse settings that are stored in a single, central place.
I couldn't find a "best practice" for how such a settings file/location should be created and structured.
My idea is to store global settings in a separate file (a module file), for example Settings.psm1 with content like this:
# Set vars
$global:scriptEnvironment = "SP2016HOSTINGDEV"
$global:logFileName = "z_Migration_to_SP2016.log"
$global:languageMapping = @{
    "en-US" = 1;
    "de-DE" = 2;
}
$global:oldWsps = @(
    [WspFile]@{ Filename = "company.solution.wsp"; IsDeployable = $true; IsGloballyDeployable = $false; FullTrustBinDeployment = $false },
    [WspFile]@{ Filename = "company.solution2.server.wsp"; IsDeployable = $true; IsGloballyDeployable = $false; FullTrustBinDeployment = $false }
)
...
And in the other modules/scripts I could then always import those settings like this:
# Set vars
$scriptDirectory = Split-Path -parent $PSCommandPath
# Module import
Import-Module (Join-Path $scriptDirectory Settings.psm1) -Force -ErrorAction Stop
Import-Module (Join-Path $scriptDirectory Functions.psm1) -Force -ErrorAction Stop
# Functions
...
This would then let me use the global settings in functions inside other script files, like this:
Function WriteLog
{
param
(
[System.String]$LogString = "",
[System.String]$LogLevel = ""
)
WriteLogToPath -LogFileName $global:logFileName -LogString $LogString -LogLevel $LogLevel
}
Is this a good approach? Or shouldn't I use module files for this, and if not, what kind of files should I use instead?
I would probably collect all your scripts/functions in a module and use the registry to store the global settings. Read the values from the registry when the module is loaded, and have variables with default values for each setting in your module so that you can write missing values to the registry.
Something like this:
$modulename = Split-Path $PSScriptRoot -Leaf
$default_foo = 'something'
$default_bar = 'or other'
...
function Get-RegistrySetting($name) {
$path = "HKCU:\Software\${script:modulename}"
if (-not (Test-Path -LiteralPath $path)) {
New-Item -Path $path -Force | Out-Null
}
try {
Get-ItemProperty $path -Name $name -ErrorAction Stop |
Select-Object -Expand $name
} catch {
$val = Get-Variable -Scope script -Name "default_${name}" -ValueOnly -ErrorAction Stop
Set-ItemProperty $path -Name $name -Value $val
$val
}
}
$global:foo = Get-RegistrySetting 'foo'
$global:bar = Get-RegistrySetting 'bar'
...
For variables you only use inside your module you may want to use the script scope instead of the global scope.
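For example, a module-internal default could stay in the script scope while only the resolved value is exposed globally - a minimal sketch building on the Get-RegistrySetting function above (the 'timeout' setting name is made up):
# Internal default, visible only inside the module:
$script:default_timeout = 30
# Resolved setting, exposed to the scripts that import the module:
$global:timeout = Get-RegistrySetting 'timeout'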
Personally, I would avoid using the registry. I agree about using modules, though. My approach would be to use a manifest module (i.e. a .psd1 file, which is essentially a key-value hashtable containing metadata about a module) and to specify a 'root' module with the RootModule key.
The module scope is then set by this RootModule, and you can define your variables in there.
You could separate your functions out into 'nested' modules (another manifest file key); these are loaded automatically by PowerShell into the root module's scope, so they should have access to those variables.
You can even control which variables and functions are exported, using keys in that manifest file as well.
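A minimal sketch of what such a manifest might look like (the manifest file name and the exported items below are made up for illustration, loosely based on the question's Settings.psm1/Functions.psm1):
# MyProject.psd1 (hypothetical)
@{
    RootModule        = 'Settings.psm1'      # defines the shared settings variables
    ModuleVersion     = '1.0.0'
    NestedModules     = @('Functions.psm1')  # loaded automatically alongside the root module
    FunctionsToExport = @('WriteLog')
    VariablesToExport = @('scriptEnvironment', 'logFileName')
}
You would then run Import-Module .\MyProject.psd1 instead of importing the individual .psm1 files; per the description above, the nested modules should then be able to see the variables defined in Settings.psm1.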
Check out the Get-Configuration PowerShell module. The concept is simple: the module adds an environment variable in which the definition of the configuration type and source is saved (as JSON).
(dir env:PSGetConfiguration).Value
{
"Mode": "Xml",
"XmlPath": "C:\\Managers\\PSGetConfiguration.xml"
}
The configuration file is very simple and contains just key and value items:
cat $(Get-ConfigurationSource).XmlPath
<Configuration>
<conf key="ExampleKey" value="ExampleValue" />
<conf key="key" value="Value123" />
<conf key="remove" value="remove" />
</Configuration>
The module exposes two main functions, Get-Configuration and Set-Configuration:
Set-Configuration -Key k1 -Value v1
Get-Configuration -Key k1
v1
On first use the module saves the XML in the module directory; this can be changed by manually editing the environment variable or by using the Set-XmlConfigurationSource command.
SQL Configuration
By default the module uses XML to store the data, but a second option is to store the data in SQL. Setting the configuration source is pretty easy:
Set-SqlConfigurationSource -SqlServerInstance ".\sql2017" -SqlServerDatabase "ConfigDatabase" -SqlServerSchema "config" -SqlServerTable "Configuration"
After this, our config will be stored in a SQL table.
The code is also available on GitHub.
On Windows, not counting ISE or x86, there are four (4) profile scripts.
AllUsersAllHosts # C:\Program Files\PowerShell\6\profile.ps1
AllUsersCurrentHost # C:\Program Files\PowerShell\6\Microsoft.PowerShell_profile.ps1
CurrentUserAllHosts # C:\Users\lit\Documents\PowerShell\profile.ps1
CurrentUserCurrentHost # C:\Users\lit\Documents\PowerShell\Microsoft.PowerShell_profile.ps1
On Linux with pwsh 6.2.0 I can find only two locations.
CurrentUserAllHosts     # ~/.config/powershell/profile.ps1
CurrentUserCurrentHost  # ~/.config/powershell/Microsoft.PowerShell_profile.ps1
Are there any "AllUsers" profile scripts on Linux? If so, where are they?
tl;dr (also applies to Windows):
The conceptual about_Profiles help topic describes PowerShell's profiles (initialization files).
The automatic $PROFILE variable contains a string that is the path of the initialization file for the current user and the current PowerShell host environment (typically, the terminal a.k.a console).
Additional profile files are defined along the dimensions of (a) all-users vs. current-user and (b) all host environments vs. the current one. They are exposed via properties that the $PROFILE string variable is decorated with, which makes them nontrivial to discover - see below.
None of the profile files exist by default, and in some cases even their parent directories may not; the bottom section of this answer shows programmatic on-demand creation and updating of the $PROFILE file.
Olaf provided the crucial pointer in a comment:
$PROFILE | select * # short for: $profile | Select-Object -Property *
shows all profile file locations, whether or not the individual profile files exist.
E.g., on my Ubuntu machine with PowerShell installed in /home/jdoe/.powershell, I get:
AllUsersAllHosts : /home/jdoe/.powershell/profile.ps1
AllUsersCurrentHost : /home/jdoe/.powershell/Microsoft.PowerShell_profile.ps1
CurrentUserAllHosts : /home/jdoe/.config/powershell/profile.ps1
CurrentUserCurrentHost : /home/jdoe/.config/powershell/Microsoft.PowerShell_profile.ps1
Length : 62
Note the presence of the [string] type's native Length property, which you could omit if you used $PROFILE | select *host* instead.
That you can get the profile locations that way is not obvious, given that $PROFILE is a string variable (type [string]).
PowerShell decorates that [string] instance with NoteProperty members reflecting all profile locations, which is why select (Select-Object) is able to extract them.
Outputting just $PROFILE - i.e. the string value - yields /home/jdoe/.config/powershell/Microsoft.PowerShell_profile.ps1, i.e. the same path as its CurrentUserCurrentHost property, i.e. the path of the user-specific profile file specific to the current PowerShell host environment (typically, the terminal aka console).[1]
You can verify the presence of these properties with reflection as follows (which reveals their values too):
$PROFILE | Get-Member -Type NoteProperty
This means that you can also use regular property access and tab completion to retrieve individual profile locations; e.g.:
# Use tab-completion to find a specific profile location.
# Expands to .Length first, then cycles through the profile-location properties.
$profile.<tab>
# Open the all-users, all-hosts profiles for editing.
# Note: Only works if the file already exists.
# Also, you must typically run as admin to modify all-user profiles.
Invoke-Item $profile.AllUsersAllHosts
Convenience functions for getting profile locations and opening profiles for editing:
The code below defines:
Get-Profile enumerates profiles, showing their location and whether they exist on a given machine.
Edit-Profile opens profile(s) for editing (use -Force to create them on demand); note that modifying all-user profiles typically requires running as admin.
function Get-Profile {
<#
.SYNOPSIS
Gets the location of PowerShell profile files and shows whether they exist.
#>
[CmdletBinding(PositionalBinding=$false)]
param (
[Parameter(Position=0)]
[ValidateSet('AllUsersAllHosts', 'AllUsersCurrentHost', 'CurrentUserAllHosts', 'CurrentUserCurrentHost')]
[string[]] $Scope
)
if (-not $Scope) {
$Scope = 'AllUsersAllHosts', 'AllUsersCurrentHost', 'CurrentUserAllHosts', 'CurrentUserCurrentHost'
}
foreach ($thisScope in $Scope) {
[pscustomobject] @{
Scope = $thisScope
FilePath = $PROFILE.$thisScope
Exists = (Test-Path -PathType Leaf -LiteralPath $PROFILE.$thisScope)
}
}
}
function Edit-Profile {
<#
.SYNOPSIS
Opens PowerShell profile files for editing. Add -Force to create them on demand.
#>
[CmdletBinding(PositionalBinding=$false, DefaultParameterSetName='Select')]
param (
[Parameter(Position=0, ValueFromPipelineByPropertyName, ParameterSetName='Select')]
[ValidateSet('AllUsersAllHosts', 'AllUsersCurrentHost', 'CurrentUserAllHosts', 'CurrentUserCurrentHost')]
[string[]] $Scope = 'CurrentUserCurrentHost'
,
[Parameter(ParameterSetName='All')]
[switch] $All
,
[switch] $Force
)
begin {
$scopes = New-Object Collections.Generic.List[string]
if ($All) {
$scopes = 'AllUsersAllHosts', 'AllUsersCurrentHost', 'CurrentUserAllHosts', 'CurrentUserCurrentHost'
}
}
process {
if (-not $All) { $scopes.AddRange($Scope) }
}
end {
$filePaths = foreach ($sc in $scopes) { $PROFILE.$sc }
$extantFilePaths = foreach ($filePath in $filePaths) {
if (-not (Test-Path -LiteralPath $filePath)) {
if ($Force) {
if ((New-Item -Force -Type Directory -Path (Split-Path -LiteralPath $filePath)) -and (New-Item -Force -Type File -Path $filePath)) {
$filePath
}
} else {
Write-Verbose "Skipping nonexistent profile: $filePath"
}
} else {
$filePath
}
}
if ($extantFilePaths.Count) {
Write-Verbose "Opening for editing: $extantFilePaths"
Invoke-Item -LiteralPath $extantFilePaths
} else {
Write-Warning "The implied or specified profile file(s) do not exist yet. To force their creation, pass -Force."
}
}
}
[1] PowerShell considers the current-user, current-host profile the profile of interest, which is why $PROFILE's string value contains that value. Note that in order to decorate a [string] instance with note properties, Add-Member alone is not enough; you must use the following idiom: $decoratedString = $string | Add-Member -PassThru propName propValue - see the Add-Member help topic.
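A quick sketch of that idiom in action (the property name and value are made up):
$decorated = 'hello' | Add-Member -PassThru -NotePropertyName Color -NotePropertyValue 'blue'
$decorated         # -> hello  (still a [string])
$decorated.Color   # -> blue   (the attached NoteProperty)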
In the below sample module file, is there a way to pass the myvar value while importing the module?
For example,
import-module -name .\test.psm1 -?? pass a parameter? e.g value of myvar
#test.psm1
$script:myvar = "hi"
function Show-MyVar {Write-Host $script:myvar}
function Set-MyVar ($Value) {$script:myvar = $Value}
#end test.psm1
(This snippet was copied from another question.)
This worked for me:
You can use the -ArgumentList parameter of the Import-Module cmdlet to pass arguments when loading a module.
You should use a param block in your module to define your parameters:
param(
[parameter(Position=0,Mandatory=$false)][boolean]$BeQuiet=$true,
[parameter(Position=1,Mandatory=$false)][string]$URL
)
Then call the import-module cmdlet like this:
import-module .\myModule.psm1 -ArgumentList $True,'http://www.microsoft.com'
As you may have already noticed, you can only supply values (no names) to -ArgumentList, so you should define your parameters carefully with the Position argument.
Reference
The -ArgumentList parameter of Import-Module unfortunately does not accept a [hashtable] or [psobject] or similar. A list with fixed positions is way too static for my liking, so I prefer to use a single [hashtable] argument that has to be "manually dispatched" like this:
param( [parameter(Mandatory=$false)][hashtable]$passedVariables )
# this module uses the following variables that need to be set and passed as [hashtable]:
# BeQuiet, URL, LotsaMore...
$passedVariables.GetEnumerator() |
ForEach-Object { Set-Variable -Name $_.Key -Value $_.Value }
...
The importing module or script does something like this:
...
# variables have been defined at this point
$variablesToPass = @{}
'BeQuiet,URL,LotsaMore' -split ',' |
ForEach-Object { $variablesToPass[$_] = Get-Variable $_ -ValueOnly }
Import-Module TheModule -ArgumentList $variablesToPass
The above code uses the same names in both modules but you could of course easily map the variable names of the importing script arbitrarily to the names that are used in the imported module.
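For example, the importing script could translate its own variable names to the names the module expects with a small mapping table (all names below are illustrative):
# Importing script's names on the left, the module's expected names on the right:
$nameMap = @{ Quiet = 'BeQuiet'; TargetUrl = 'URL' }
$variablesToPass = @{}
foreach ($entry in $nameMap.GetEnumerator()) {
    $variablesToPass[$entry.Value] = Get-Variable -Name $entry.Key -ValueOnly
}
Import-Module TheModule -ArgumentList $variablesToPass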
I'm surprised that I didn't get the answer for this common scenario after Googling for a while...
How can an environment variable be set in PowerShell if it does not exist?
The following code defines environment variable FOO for the current process, if it doesn't exist yet.
if ($null -eq $env:FOO) { $env:FOO = 'bar' }
# If you want to treat a *nonexistent* variable the same as
# an existent one whose value is the *empty string*, you can simplify to:
if (-not $env:FOO) { $env:FOO = 'bar' }
# Alternatively:
if (-not (Test-Path env:FOO)) { $env:FOO = 'bar' }
# Or even (quietly fails if the variable already exists):
New-Item -ErrorAction Ignore env:FOO -Value bar
In PowerShell (Core) 7.1+, which has null-coalescing operators, you can simplify to:
$env:FOO ??= 'bar'
Note:
Environment variables are strings by definition. If a given environment variable is defined but has no value, its value is the empty string ('') rather than $null. Thus, comparing to $null can be used to distinguish between an undefined environment variable and one that is defined but has no value.
However, note that assigning to environment variables in PowerShell / .NET makes no distinction between $null and '': either value results in undefining (removing) the target environment variable. Similarly, in cmd.exe, set FOO= results in removal/non-definition of variable FOO, and the GUI dialog (accessible via sysdm.cpl) doesn't allow you to define a variable with an empty string either. The Windows API (SetEnvironmentVariable), by contrast, does permit creating environment variables that contain the empty string.
On Unix-like platforms, empty-string values are allowed too, and the native, POSIX-compatible shells (e.g., bash and /bin/sh) - unlike PowerShell - also allow you to create them (e.g., export FOO=). Note that environment variable definitions and lookups are case-sensitive on Unix, unlike on Windows.
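A quick way to see the removal behavior described above in a session (the variable name is arbitrary):
$env:DEMO = 'x'
Test-Path env:DEMO   # True
$env:DEMO = ''       # assigning the empty string (or $null) removes the variable in PowerShell
Test-Path env:DEMO   # False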
Note: If the environment variable is created on demand by the assignment above ($env:FOO = ...), it will exist only for the current process and any child processes it creates (thanks, PetSerAl).
The following was mostly contributed by Ansgar Wiechers, with a supplement by Mathias R. Jessen:
On Windows[*], if you want to define an environment variable persistently, you need to use the static SetEnvironmentVariable() method of the [System.Environment] class:
# user environment
[Environment]::SetEnvironmentVariable('FOO', 'bar', 'User')
# system environment (requires admin privileges)
[Environment]::SetEnvironmentVariable('FOO', 'bar', 'Machine')
Note that these definitions take effect in future sessions (processes), so in order to define the variable for the current process as well, run $env:FOO = 'bar' in addition, which is effectively the same as [Environment]::SetEnvironmentVariable('FOO', 'bar', 'Process').
When using [Environment]::SetEnvironmentVariable() with User or Machine, a WM_SETTINGCHANGE message is sent to other applications to notify them of the change (though few applications react to such notifications).
This doesn't apply when targeting Process (or when assigning to $env:FOO), because no other applications (processes) can see the variable anyway.
See also: Creating and Modifying Environment Variables (TechNet article).
[*] On Unix-like platforms, attempts to target the persistent scopes - User or Machine - are quietly ignored as of .NET 7, and this non-support for defining persistent environment variables is unlikely to change, given the lack of a unified mechanism across Unix platforms.
Code
function Set-LocalEnvironmentVariable {
param (
[Parameter()]
[System.String]
$Name,
[Parameter()]
[System.String]
$Value,
[Parameter()]
[Switch]
$Append
)
if($Append.IsPresent)
{
if(Test-Path "env:$Name")
{
$Value = (Get-Item "env:$Name").Value + $Value
}
}
Set-Item env:$Name -Value "$value" | Out-Null
}
function Set-PersistentEnvironmentVariable {
param (
[Parameter()]
[System.String]
$Name,
[Parameter()]
[System.String]
$Value,
[Parameter()]
[Switch]
$Append
)
Set-LocalEnvironmentVariable -Name $Name -Value $Value -Append:$Append
if ($Append.IsPresent) {
$value = (Get-Item "env:$Name").Value
}
if ($IsWindows) {
setx "$Name" "$Value" | Out-Null
return
}
$pattern = "\s*export[ \t]+$Name=[\w]*[ \t]*>[ \t]*\/dev\/null[ \t]*;[ \t]*#[ \t]*$Name\s*"
if ($IsLinux) {
$file = "~/.bash_profile"
$content = (Get-Content "$file" -ErrorAction Ignore -Raw) + [System.String]::Empty
$content = [System.Text.RegularExpressions.Regex]::Replace($content, $pattern, [String]::Empty);
$content += [System.Environment]::NewLine + [System.Environment]::NewLine + "export $Name=$Value > /dev/null ; # $Name"
Set-Content "$file" -Value $content -Force
return
}
if ($IsMacOS) {
$file = "~/.zprofile"
$content = (Get-Content "$file" -ErrorAction Ignore -Raw) + [System.String]::Empty
$content = [System.Text.RegularExpressions.Regex]::Replace($content, $pattern, [String]::Empty);
$content += [System.Environment]::NewLine + [System.Environment]::NewLine + "export $Name=$Value > /dev/null ; # $Name"
Set-Content "$file" -Value $content -Force
return
}
throw "Invalid platform."
}
function Set-PersistentEnvironmentVariable
Sets a variable/value in the current process and persists it. This function calls Set-LocalEnvironmentVariable to set the process-scope variable, and then performs the work needed to persist the value for future sessions.
On Windows you can use:
[Environment]::SetEnvironmentVariable with User or Machine scope (these scopes don't work on Linux or macOS)
the setx command
On Linux we can add export VARIABLE_NAME=value to the ~/.bash_profile file; a new bash terminal executes the instructions located in ~/.bash_profile when it starts.
On macOS it is similar to Linux, but if your shell is zsh the file is ~/.zprofile; if the default shell is bash, the file is ~/.bash_profile. You could extend the function to detect the default shell; the code above assumes it is zsh.
function Set-LocalEnvironmentVariable
Sets a variable/value in the current process, using the env: drive.
Examples
#Set the value "Jo" for variable "NameX"; this value is accessible in the current process and its subprocesses, and in a newly opened terminal.
Set-PersistentEnvironmentVariable -Name "NameX" -Value "Jo"; Write-Host $env:NameX
#Append the value "ma" to the current value of variable "NameX"; this value is accessible in the current process and its subprocesses, and in a newly opened terminal.
Set-PersistentEnvironmentVariable -Name "NameX" -Value "ma" -Append; Write-Host $env:NameX
#Set the value ".JomaProfile" for variable "ProfileX"; this value is accessible in the current process and its subprocesses only.
Set-LocalEnvironmentVariable "ProfileX" ".JomaProfile"; Write-Host $env:ProfileX
Output: (screenshots of the results on Windows 10 and Ubuntu WSL omitted)
References
Check About Environment Variables
Shell initialization files
ZSH: .zprofile, .zshrc, .zlogin - What goes where?
You can use the following code to set an environment variable in PowerShell if it doesn't exist:
if (!(Test-Path -Path Env:VAR_NAME)) {
New-Item -Path Env:VAR_NAME -Value "VAR_VALUE"
}
Replace VAR_NAME with the name of the environment variable and VAR_VALUE with the desired value.
I am trying to read values from a text file and keep them as variables to use in my script.
This config file contains strings, ints, booleans and an array that can contain strings, ints and booleans.
When I declare the variables outright, I have no problems. My script functions as expected. However when I am reading in the config file and trying to create variables based on that, I only get the variables declared as strings.
This creates my config file in the format I would like.
Function Create-Config() {
If (!(Test-Path config.txt)) {
$currentlocation=Get-Location
$parentfolder=(get-item $currentlocation).parent.FullName
New-Item config.txt -ItemType "file"
Add-Content config.txt "SERVER_NAME=MyServer"
Add-Content config.txt "SERVER_LOCATION=$currentlocation"
Add-Content config.txt "BACKUP_LOCATION=$parentfolder\backup"
Add-Content config.txt "CRAFTBUKKIT=craftbukkit.jar"
Add-Content config.txt "JAVA_FLAGS=-Xmx1G"
Add-Content config.txt "CRAFTBUKKIT_OPTIONS=-o True -p 1337"
Add-Content config.txt "TEST_DEPENDENCIES=True"
Add-Content config.txt "DELETE_LOG=True"
Add-Content config.txt "TAKE_BACKUP=True"
Add-Content config.txt "RESTART_PAUSE=5"
}
}
However, either I need to change how I create my config file, or change how I import those variables. I want the config file to be as simple as possible. I am using this code to import the values:
Function Load-Variables() {
Get-Content config.txt | Foreach-Object {
$var = $_.Split('=')
New-Variable -Name $var[0] -Scope Script -Value $var[1]
}
}
As you can see, I don't explicitly set the variable types, since the variables from the config are of different types (booleans, ints, arrays, strings). However, PowerShell imports them all as strings. I can import all variables individually (which I may have to do), but I still feel like I will be stuck on the array.
If I declare the array using this command:
New-Variable -Name CRAFTBUKKIT_OPTIONS -Option Constant -Value ([array]@('-o',$true,'-p',25565))
I get exactly what I want, but I need to import it from the config file instead of declaring the variable in my script. The java program is a bit finicky, so I cannot just import that value as a string, or it will not get passed properly and those options get ignored. I've found the only way it works is to have it as an array (as defined above). I also want to note that there could be many more config file options presented than in my example.
I am not sure what is the better approach - importing the variables to be declared correctly (what I would like to do), or assuming they cannot be imported as anything other than a string and then parsing that string into the proper variable types after.
I have tried declaring the variables before hand and using the Set-Variable command to set the values, but that doesn't work. It very much seems like my variables are being imported with Get-Content as strings from the start instead of the correct types.
Full script is here:
https://gist.github.com/TnTBass/4692f2a00fade7887ce4
Any help?
$types = @{
SERVER_NAME = {$args[0]}
SERVER_LOCATION = {$args[0]}
BACKUP_LOCATION = {$args[0]}
CRAFTBUKKIT = {$args[0]}
JAVA_FLAGS = {$args[0]}
CRAFTBUKKIT_OPTIONS = { ($args[0].split(' ')[0] -as [string]),
([bool]::Parse($args[0].split(' ')[1])),
($args[0].split(' ')[2] -as [string]),
($args[0].split(' ')[3] -as [int]) }
TEST_DEPENDENCIES = {[bool]::Parse($args[0])}
DELETE_LOG = {[bool]::Parse($args[0])}
TAKE_BACKUP = {[bool]::Parse($args[0])}
RESTART_PAUSE = {$args[0] -as [int]}
}
$ht = [ordered]@{}
gc config.txt |
foreach {
$parts = $_.split('=').trim()
$ht[$parts[0]] = &$types[$parts[0]] $parts[1]
}
New-object PSObject -Property $ht
SERVER_NAME : MyServer
SERVER_LOCATION : C:\testfiles
BACKUP_LOCATION : C:\\backup
CRAFTBUKKIT : craftbukkit.jar
JAVA_FLAGS : -Xmx1G
CRAFTBUKKIT_OPTIONS : {-o, True, -p, 1337}
TEST_DEPENDENCIES : True
DELETE_LOG : True
TAKE_BACKUP : True
RESTART_PAUSE : 5
The $types hash table uses parameter names from your configuration file for the keys, and script blocks that define the typing and data transformation that needs to be done on the string value for that parameter you're reading from the file. As each line is read in from the file, this part of the script:
$parts = $_.split('=').trim()
$ht[$parts[0]] = &$types[$parts[0]] $parts[1]
Splits it at the '=', then looks up the script block for that parameter and invokes it using the value as its argument. The results are stored in a hash table ($ht), which is then used to create an object. You can omit the object creation and just use the hash table to pass your config values if that's more appropriate for your application.
You might need to add some error trapping to test the input data and/or resulting values for production work, but I think the hash table of script blocks is a pretty clean way to present the typing and transformation, and it should be fairly intuitive to read and easy to maintain in the script if you need to make changes. The first 5 parameters are string parameters and are just returned as-is, but you can explicitly cast them to [string] in the script block for clarity.
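One way to add such a guard - a sketch only; the warnings and the fall-back to a plain string are my own additions, not part of the original approach:
gc config.txt |
  foreach {
    $parts = $_.split('=').trim()
    if (-not $types.ContainsKey($parts[0])) {
        Write-Warning "Unknown setting '$($parts[0])' in config.txt; keeping it as a plain string."
        $ht[$parts[0]] = $parts[1]
    }
    else {
        try   { $ht[$parts[0]] = & $types[$parts[0]] $parts[1] }
        catch { Write-Warning "Could not convert '$($parts[0])=$($parts[1])': $_" }
    }
  }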
Of course PowerShell handles the variable values as strings; it cannot tell the string "1337" apart from the integer 1337 without some extra help. In order to specify the data type, you need some metadata, and there is a format just for that - XML. Now, you don't need to create an XML file by yourself: the Import-Clixml and Export-Clixml cmdlets manage PowerShell object serialization.
One could, for example, save the configuration settings in a hash table and serialize it like so:
$cfgSettings = @{
"currentlocation" = "my current location";
"parentfolder" = "my backup location";
"SERVER_NAME" = "MyServer";
"SERVER_LOCATION" = $currentlocation;
"BACKUP_LOCATION" = "$parentfolder\backup";
"CRAFTBUKKIT" = "craftbukkit.jar";
"JAVA_FLAGS" = "-Xmx1G";
"CRAFTBUKKIT_OPTIONS" = "-o True -p 1337";
"TEST_DEPENDENCIES" = $true;
"DELETE_LOG" = $true;
"TAKE_BACKUP" = $true;
"RESTART_PAUSE" = 5
}
Export-Clixml -Path myConf.xml -InputObject $cfgSettings
The file contains the serialized hashtable together with the data types. For example, DELETE_LOG is a boolean, RESTART_PAUSE an int, and so on:
<En>
<S N="Key">DELETE_LOG</S>
<B N="Value">true</B>
</En>
<En>
<S N="Key">RESTART_PAUSE</S>
<I32 N="Value">5</I32>
</En>
<En>
<S N="Key">JAVA_FLAGS</S>
<S N="Value">-Xmx1G</S>
</En>
Repopulating and accessing the settings hashtable is not hard either:
$config = Import-CliXML myConf.xml
$config["DELETE_LOG"] # NB! Case sensitive - "delete_log" is a different key!
True
Edit
As for how to create the array, here is a sample that uses deserialized data.
Split the options and serialize the values:
$config = @{
"CRAFTBUKKIT_OPTION1" = "-o" ;
"CRAFTBUKKIT_OPTION2" = $true ;
"CRAFTBUKKIT_OPTION3" = "-p" ;
"CRAFTBUKKIT_OPTION4" = 1337 }
Export-Clixml -InputObject $config -Path C:\temp\conf.xml
Deserialize the values and create an array out of them:
$config2 = Import-Clixml C:\temp\conf.xml
$array = @(
$config2["CRAFTBUKKIT_OPTION1"],
$config2["CRAFTBUKKIT_OPTION2"],
$config2["CRAFTBUKKIT_OPTION3"],
$config2["CRAFTBUKKIT_OPTION4"])
Print the array contents with type info:
$array | % { $("{0} {1}" -f $_, ($_.gettype().name)) }
# Output
-o String
True Boolean
-p String
1337 Int32
Using PowerShell, how can I check if an application is locking a file?
I'd like to check which process/application is using the file, so that I can close it.
You can do this with the Sysinternals tool handle.exe. Try something like this:
PS> $handleOut = handle
PS> foreach ($line in $handleOut) {
if ($line -match '\S+\spid:') {
$exe = $line
}
elseif ($line -match 'C:\\Windows\\Fonts\\segoeui\.ttf') {
"$exe - $line"
}
}
MSASCui.exe pid: 5608 ACME\hillr - 568: File (---) C:\Windows\Fonts\segoeui.ttf
...
This could help you: Use PowerShell to find out which process locks a file. It parses the Modules property (a System.Diagnostics.ProcessModuleCollection) of each process and looks for the file path of the locked file:
$lockedFile="C:\Windows\System32\wshtcpip.dll"
Get-Process | foreach{$processVar = $_;$_.Modules | foreach{if($_.FileName -eq $lockedFile){$processVar.Name + " PID:" + $processVar.id}}}
You should be able to use the openfiles command from either the regular command line or from PowerShell.
The openfiles built-in tool can be used for file shares or for local files. For local files, you must turn the feature on and restart the machine (again, just for the first use). I believe the command to turn this feature on is:
openfiles /local on
For example (works on Windows Vista x64):
openfiles /query | find "chrome.exe"
That successfully returns file handles associated with Chrome. You can also pass in a file name to see the process currently accessing that file.
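For example, to narrow the output to a particular file instead of a particular process (the file name here is just an illustration):
openfiles /query | find /i "foo.txt"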
You can find a solution using Sysinternals' Handle utility.
I had to modify the code (slightly) to work with PowerShell 2.0:
#/* http://jdhitsolutions.com/blog/powershell/3744/friday-fun-find-file-locking-process-with-powershell/ */
Function Get-LockingProcess {
[cmdletbinding()]
Param(
[Parameter(Position=0, Mandatory=$True,
HelpMessage="What is the path or filename? You can enter a partial name without wildcards")]
[Alias("name")]
[ValidateNotNullorEmpty()]
[string]$Path
)
# Define the path to Handle.exe
# //$Handle = "G:\Sysinternals\handle.exe"
$Handle = "C:\tmp\handle.exe"
# //[regex]$matchPattern = "(?<Name>\w+\.\w+)\s+pid:\s+(?<PID>\b(\d+)\b)\s+type:\s+(?<Type>\w+)\s+\w+:\s+(?<Path>.*)"
# //[regex]$matchPattern = "(?<Name>\w+\.\w+)\s+pid:\s+(?<PID>\d+)\s+type:\s+(?<Type>\w+)\s+\w+:\s+(?<Path>.*)"
# (?m) for multiline matching.
# It must be . (not \.) for user group.
[regex]$matchPattern = "(?m)^(?<Name>\w+\.\w+)\s+pid:\s+(?<PID>\d+)\s+type:\s+(?<Type>\w+)\s+(?<User>.+)\s+\w+:\s+(?<Path>.*)$"
# skip processing banner
$data = &$handle -u $path -nobanner
# join output for multi-line matching
$data = $data -join "`n"
$MyMatches = $matchPattern.Matches( $data )
# //if ($MyMatches.value) {
if ($MyMatches.count) {
$MyMatches | foreach {
[pscustomobject]@{
FullName = $_.groups["Name"].value
Name = $_.groups["Name"].value.split(".")[0]
ID = $_.groups["PID"].value
Type = $_.groups["Type"].value
User = $_.groups["User"].value.trim()
Path = $_.groups["Path"].value
toString = "pid: $($_.groups["PID"].value), user: $($_.groups["User"].value), image: $($_.groups["Name"].value)"
} #hashtable
} #foreach
} #if data
else {
Write-Warning "No matching handles found"
}
} #end function
Example:
PS C:\tmp> . .\Get-LockingProcess.ps1
PS C:\tmp> Get-LockingProcess C:\tmp\foo.txt
Name Value
---- -----
ID 2140
FullName WINWORD.EXE
toString pid: 2140, user: J17\Administrator, image: WINWORD.EXE
Path C:\tmp\foo.txt
Type File
User J17\Administrator
Name WINWORD
PS C:\tmp>
I was looking for a solution to this as well and hit some hiccups.
Didn't want to use an external app
openfiles requires the 'local ON' attribute, which meant systems had to be configured to use it before execution.
After extensive searching, I found:
https://github.com/pldmgg/misc-powershell/blob/master/MyFunctions/PowerShellCore_Compatible/Get-FileLockProcess.ps1
Thanks to Paul DiMaggio
This seems to be pure PowerShell and .NET/C#.
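A hedged usage sketch - the parameter name -FilePath is my assumption, so check the script's param() block for the exact name:
# Dot-source the downloaded script to load the function into the session:
. .\Get-FileLockProcess.ps1
# Ask which process(es) hold a handle to the file (path is illustrative):
Get-FileLockProcess -FilePath 'C:\temp\foo.txt'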
You can search for your path in the output of handle.exe.
I've used PowerShell, but you can do this with another command-line tool.
With administrative privileges:
handle.exe -a | Select-String "<INSERT_PATH_PART>" -context 0,100
Scan down the lines for "Thread: ..."; there you should see the name of the process using your path.
I posted a PowerShell module to the PowerShell Gallery to discover & kill processes that have open handles to a file or folder.
It exposes functions to: 1) find the locking process, and 2) kill the locking process.
The module automatically downloads handle.exe on first usage.
Find-LockingProcess()
Retrieves process information that has a file handle open to the specified path.
Example: Find-LockingProcess -Path $Env:LOCALAPPDATA
Example: Find-LockingProcess -Path $Env:LOCALAPPDATA | Get-Process
Stop-LockingProcess()
Kills all processes that have a file handle open to the specified path.
Example: Stop-LockingProcess -Path $Home\Documents
PsGallery Link: https://www.powershellgallery.com/packages/LockingProcessKiller
To install run:
Install-Module -Name LockingProcessKiller
I like what the command prompt (CMD) has, and it can be used in PowerShell as well:
tasklist /m <dllName>
Just note that you can't enter the full path of the DLL file. Just the name is good enough.
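For example (the DLL name is just an illustration):
tasklist /m ntdll.dll   # lists every process that currently has ntdll.dll loaded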
I've seen a nice solution at Locked file detection that uses only PowerShell and .NET framework classes:
function TestFileLock {
## Attempts to open a file and trap the resulting error if the file is already open/locked
param ([string]$filePath )
$filelocked = $false
$fileInfo = New-Object System.IO.FileInfo $filePath
trap {
Set-Variable -name filelocked -value $true -scope 1
continue
}
$fileStream = $fileInfo.Open( [System.IO.FileMode]::OpenOrCreate,[System.IO.FileAccess]::ReadWrite, [System.IO.FileShare]::None )
if ($fileStream) {
$fileStream.Close()
}
$obj = New-Object Object
$obj | Add-Member Noteproperty FilePath -value $filePath
$obj | Add-Member Noteproperty IsLocked -value $filelocked
$obj
}
If you modify the above function slightly, as below, it will return True or False.
(You will need to execute it with full admin rights.)
e.g. Usage:
PS> TestFileLock "c:\pagefile.sys"
function TestFileLock {
## Attempts to open a file and trap the resulting error if the file is already open/locked
param ([string]$filePath )
$filelocked = $false
$fileInfo = New-Object System.IO.FileInfo $filePath
trap {
Set-Variable -name Filelocked -value $true -scope 1
continue
}
$fileStream = $fileInfo.Open( [System.IO.FileMode]::OpenOrCreate, [System.IO.FileAccess]::ReadWrite, [System.IO.FileShare]::None )
if ($fileStream) {
$fileStream.Close()
}
$filelocked
}