So, my challenge today: I have a config file (really just a .txt document) that stores variables used to pass information between scripts or to persist values across restarts.
I am looking for a more efficient way to read and update the file. Currently I read the file with:
Get-Content $current\Install.cfg | ForEach-Object {
    $line = $_
    $a, $b = $line.Split('=')
    Set-Variable -Name $a -Value $b
}
But to overwrite the contents, I recreate the file with:
ECHO OSV=$OSV >>"$ConfigLoc\tool.cfg"
ECHO OSb=$OSb >>"$ConfigLoc\tool.cfg"
ECHO cNum=$cNum >>"$ConfigLoc\tool.cfg"
ECHO cCode=$cCode >>"$ConfigLoc\tool.cfg"
ECHO Comp=$Comp >>"$ConfigLoc\tool.cfg"
Each time I have added a new saved variable, I have just hardcoded the new variable into both the original config file and the config updater.
As my next updates require an additional 30 variables on top of my current 15, I would like something like:
Get-Content $current\Install.cfg | ForEach-Object {
    $line = $_
    $a, $b = $line.Split('=')
    ECHO $a=$$a
}
Where $$a would use the value of $a in the loop as the name of the variable whose value to write.
The best example I can show to clarify is:
ECHO $a=$$a (what is in the current loop)
ECHO OSV=$OSV (how it actually appears in the hardcoded version)
Not sure how to clarify this any further, or how to achieve it when the variable's name is itself stored in a variable.
If you want to create a file that has name=value parameters, here's an alternate suggestion. This is a snippet of a real script I use every day. You might modify it so it reads your .csv input and uses it instead of the hard coded values.
$Sites = ("RawSiteName|RoleName|DevUrl|SiteID|HttpPort|HttpsPort", `
"SiteName|Name of role|foo.com|1|80|443" `
) | ConvertFrom-CSV -Delimiter "|"
$site = $sites[0]
Write-Host "RawSiteName =$($site.RawSiteName)"
You might be able to use something similar to $text = Get-Content MyParameters.csv and pipe that to the ConvertFrom-CSV cmdlet. I realize it's not a direct answer to what you are doing but it will let you programmatically create a file to pass across scripts.
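For instance, a minimal sketch of that file-based variant (MyParameters.csv is a hypothetical file with the same pipe-delimited header line as above):
# Read the pipe-delimited file and turn each subsequent line into an object.
$Sites = Get-Content MyParameters.csv | ConvertFrom-Csv -Delimiter "|"
foreach ($site in $Sites) {
    Write-Host "RawSiteName = $($site.RawSiteName)"
}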
Thanks for the help everyone. This is the solution I am going with. Importing and exporting couldn't be simpler. If I have to manually update the XML install default I can with ease which is also amazing. I also love the fact that even if you import as $Test you can still use $original to access variables. I will be creating multiple hashtables to organize the different data I will be using going forward and just import/export it in a $config variable as the master.
$original = @{
    OSV = '0'
    OSb = '0'
    cNum = '00000'
    cCode = '0000'
    Client = 'Unknown'
    Comp = 'Unknown'
}
$original | Export-Clixml $Home\Desktop\sample.cfg
$Test = Import-Clixml $Home\Desktop\sample.cfg
Write $Test
Write $original.Client
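For later updates, the round trip is just as simple; a minimal sketch of the update workflow, reusing the sample path above:
# Load the saved settings, change one, and write the file back.
$config = Import-Clixml $Home\Desktop\sample.cfg
$config.cNum = '12345'
$config | Export-Clixml $Home\Desktop\sample.cfg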
In essence, you're looking for variable indirection: accessing a variable indirectly, via its name stored in another variable.
In PowerShell, Get-Variable allows you to do that, as demonstrated in the following:
# Sample variables.
$foo='fooVal'
$bar='barVal'
# List of variables to append to the config file -
# the *names* of the variables above.
$varsToAdd =
'foo',
'bar'
# Loop over the variable names and use string expansion to create <name>=<value> lines.
# Note how Get-Variable is used to retrieve each variable's value via its *name*.
$(foreach ($varName in $varsToAdd) {
    "$varName=$(Get-Variable $varName -ValueOnly)"
}) >> "$ConfigLoc/tool.cfg"
With the above, the following lines are appended to the output *.cfg file:
foo=fooVal
bar=barVal
Note that you can read such a file more easily with the ConvertFrom-StringData cmdlet, which outputs a hashtable with the name-value pairs from the file:
$htSettings = Get-Content -Raw "$ConfigLoc/tool.cfg" | ConvertFrom-StringData
Accessing $htSettings.foo would then return fooVal, for instance.
With a hashtable as the settings container, updating the config file becomes easier, as you can simply recreate the file with all settings and their current values:
$htSettings.GetEnumerator() |
ForEach-Object { "$($_.Key)=$($_.Value)" } > "$ConfigLoc/tool.cfg"
Note: PowerShell by default doesn't enumerate the entries of a hashtable in the pipeline, which is why .GetEnumerator() is needed.
Generally, though, this kind of manual serialization is fraught, as others have pointed out, and there are more robust - though typically less friendly - alternatives.
With your string- and line-based serialization approach, there are two things to watch out for:
All values are saved as strings, so you have to manually reconvert them to the desired data type, if necessary - and if even possible, given that not all objects provide meaningful string representations.
ConvertFrom-StringData will fail with duplicate names in the config file, which means you have to manually ensure that you create no duplicate entries when you append to the file - if you use the above approach of recreating the file from a hashtable every time, however, you're safe.
Generally, the most robust serialization format is the one provided by Export-Clixml, but note that it is not a friendly format - be careful with manual edits.
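As an illustration of the first caveat, here is a hypothetical reconversion of one of the numeric settings from the question; everything ConvertFrom-StringData returns is a [string]:
$htSettings = Get-Content -Raw "$ConfigLoc/tool.cfg" | ConvertFrom-StringData
# All values come back as strings, so cast explicitly where a number is needed.
[int] $cNum = $htSettings.cNum   # e.g. the string '00000' becomes the integer 0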
I'm trying to find an efficient way to read the value of a string variable in a PowerShell .ps1 file and then update the same variable/value in another .ps1 file. In my specific case, I would update a variable for the version # on script one and then I would want to run a script to update it on multiple other .ps1 files. For example:
1_script.ps1 - Script I want to read variable from
$global:scriptVersion = "v1.1"
2_script.ps1 - script I would want to update variable on (Should update to v1.1)
$global:scriptVersion = "v1.0"
I would want to update 2_script.ps1 to set the variable to "v1.1" as read from 1_script.ps1. My current method is using get-content with a regex to find a line starting with my variable, then doing a bunch of replaces to get the portion of the string I want. This does work, but it seems like there is probably a better way I am missing or didn't get working correctly in my tests.
My Modified Regex Solution Based on the Answer by @mklement0:
I slightly modified @mklement0's solution because dot-sourcing the first script was causing it to run.
$file1 = ".\1_script.ps1"
$file2 = ".\2_script.ps1"
$fileversion = (Get-Content $file1 | Where-Object {$_ -match '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+'}).Split("=")[1].Trim().Replace('"','')
(Get-Content -Raw $file2) -replace '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+',$fileversion | Set-Content $file2 -NoNewLine
Generally, the most robust way to parse PowerShell code is to use the language parser. However, reconstructing source code, with modifications after parsing, may situationally be hampered by the parser not reporting the details of intra-line whitespace - see this answer for an example and a discussion.[1]
Pragmatically speaking, using a regex-based -replace solution is probably good enough in your simple case (note that the value to update is assumed to be enclosed in "..." - but matching could be made more flexible to support '...' quoting too):
# Dot-source the first script in order to obtain the new value.
# Note: This invariably executes *all* top-level code in the script.
. .\1_script.ps1
# Outputs to the display.
# Append
# | Set-Content -Encoding utf8 2_script.ps1
# to save back to the input file.
(Get-Content -Raw 2_script.ps1) -replace '(?m)(?<=^\s*\$global:scriptVersion\s*=\s*")[^"]+', $global:scriptVersion
For an explanation of the regex and the ability to experiment with it, see this regex101.com page.
[1] Syntactic elements are reported in terms of line and column position, and columns are character-based, meaning that spaces and tabs are treated the same, so that a difference of, say, 3 character positions can represent 3 spaces, 3 tabs, or any mix of it - the parser won't tell you. However, if your approach allows keeping the source code as a whole while only removing and splicing in certain elements, that won't be a problem, as shown in iRon's helpful answer.
To complement the helpful answer from @mklement0: in case you do go for the PowerShell abstract syntax tree (AST), you might use the Extent.StartOffset/Extent.EndOffset properties to reconstruct your script:
using namespace System.Management.Automation.Language

$global:scriptVersion = 'v1.1' # . .\Script1.ps1
$Script2 = { # = Get-Content -Raw .\Script2.ps1
    [CmdletBinding()]param()
    begin {
        $global:scriptVersion = "v1.0"
    }
    process {
        $_
    }
    end {}
}.ToString()

$Ast = [Parser]::ParseInput($Script2, [ref]$null, [ref]$null)
$Extent = $Ast.Find(
    {
        $args[0] -is [AssignmentStatementAst] -and
        $args[0].Left.VariablePath.UserPath -eq 'global:scriptVersion' -and
        $args[0].Operator -eq 'Equals'
    }, $true
).Right.Extent

-Join (
    $Script2.SubString(0, $Extent.StartOffset),
    '"' + $global:scriptVersion + '"', # the extent includes the quotes, so re-add them
    $Script2.SubString($Extent.EndOffset)
) # |Set-Content .\Script2.ps1
I have a Commands.csv file like:
| Command |
| -----------------------------------------------|
|(Get-FileHash C:\Users\UserA\Desktop\File1).Hash|
|(Get-FileHash C:\Users\UserA\Desktop\File2).Hash|
|(Get-FileHash C:\Users\UserA\Desktop\File3).Hash|
Header name is "Command"
My idea is to:
Use ForEach ($line in Get-Content C:\Users\UserA\Desktop\Commands.csv ) {echo $line}
Execute $line one by one via powershell.exe, then output a result to a new .csv file - "result.csv"
Can you give me some directions and suggestions to implement this idea? Thanks!
Important:
Only use the technique below with input files you either fully control or implicitly trust to not contain malicious commands.
To execute arbitrary PowerShell statements stored in strings, you can use Invoke-Expression, but note that it should typically be avoided, as there are usually better alternatives - see this answer.
There are advanced techniques that let you analyze the statements before executing them and/or let you use a separate runspace with a restrictive language mode that limits what kinds of statements are allowed to execute, but that is beyond the scope of this answer.
Given that your input file is a .csv file with a Command column, import it with Import-Csv and access the .Command property on the resulting objects.
Use Get-Content only if your input file is a plain-text file without a header row, in which case the extension should really be .txt. (If it has a header row but there's only one column, you could get away with Get-Content Commands.csv | Select-Object -Skip 1 | ...). If that is the case, use $_ instead of $_.Command below.
To also use the CSV format for the output file, all commands must produce objects of the same type or at least with the same set of properties. The sample commands in your question output strings (the value of the .Hash property), which cannot meaningfully be passed to Export-Csv directly, so a [pscustomobject] wrapper with a Result property is used, which will result in a CSV file with a single column named Result.
Import-Csv Commands.csv |
    ForEach-Object {
        [pscustomobject] @{
            # !! SEE CAVEAT AT THE TOP.
            Result = Invoke-Expression $_.Command
        }
    } |
    Export-Csv -NoTypeInformation Results.csv
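If your input file really were a headerless plain-text file, a minimal variant might look like this (commands.txt is a hypothetical file name; the same security caveat applies):
Get-Content commands.txt |
    ForEach-Object {
        [pscustomobject] @{
            # !! SEE CAVEAT AT THE TOP.
            Result = Invoke-Expression $_
        }
    } |
    Export-Csv -NoTypeInformation Results.csv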
Is there a way to get only the locally declared variables in a powershell script?
In this snippit, I would want it to return only myVar1, myVar2, anotherVar:
$myVar1 = "myVar1"
$myVar2 = "myVar2"
$anotherVar = "anotherVar"
Get-Variable -Scope Script
But it instead returns a ton of other local script variables.
The problem I'm trying to solve, and maybe you can suggest another way, is that I have many Powershell scripts that have a bunch of misc variable constants declared at the top.
I want to export them all to disk (xml) for import later.
So to call Get-Variable bla* | Export-Clixml vars.xml, I need to know all of the variable names.
So is there a way I can like do
$allVars = {
$myVar1 = "alex"
$myVar2 = "iscool"
$anotherVar = "thisisanotherVar"
}
Get-Variable allVars | Export-Clixml "C:\TEMP\AllVars.xml"
And then later Import-Clixml .\AllVars.xml | %{ Set-Variable $_.Name $_.Value } ?
So that the rest of the script could still use $myVar1 etc without major changes to what is already written?
The issue is that there are more variables accessible in that scope beyond the ones you declared. One thing you could do is capture the list of variables before you declare yours, capture another copy afterwards, and compare the two lists to isolate just yours.
$before = Get-Variable -Scope Local
$r = "Stuff"
$after = Get-Variable -Scope Local
# Get the differences
Compare-Object -ReferenceObject $before -DifferenceObject $after -Property Name -PassThru
The above should spit out the names and simple values of your variables. If need be, you can send that down the pipe to Export-Clixml; for more complicated objects you might need to increase the -Depth parameter.
Caveat: If you are changing some default variable values the above code currently would omit them since it is just looking for new names.
Also, I'm not sure you can import them exactly the same way they were exported; that largely depends on your data types. For simple variables it should be just fine.
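Putting it together, a sketch of the full round trip under those assumptions might look like this:
$before = Get-Variable -Scope Local
$myVar1 = "alex"
$myVar2 = "iscool"
$anotherVar = "thisisanotherVar"
$after = Get-Variable -Scope Local

# $before itself shows up as a new variable, so filter it out explicitly.
Compare-Object -ReferenceObject $before -DifferenceObject $after -Property Name -PassThru |
    Where-Object Name -ne 'before' |
    Export-Clixml "C:\TEMP\AllVars.xml"

# Later, in another script:
Import-Clixml "C:\TEMP\AllVars.xml" | ForEach-Object { Set-Variable -Name $_.Name -Value $_.Value }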
I need to know all of the variable names.
The only other way that I am aware of (I never really considered this) would be to change all of the variables to have a prefix like my_, so that you could export them with a single line:
Get-Variable my_* | Export-Clixml vars.xml
I am trying to automate Active Directory installation on Windows Server 2008 using Windows PowerShell. I created a text file with a .tmpl extension containing:
[DCINSTALL]
ReplicaOrNewDomain=__ReplicaOrNewDomain__
Then I created an answer file in a text format:
[DCINSTALL]
ReplicaOrNewDomain=$env:ReplicaOrNewDomain
Now I want to write a PowerShell script that uses the template file to get the value of the variable ReplicaOrNewDomain from the environment and replaces $env:ReplicaOrNewDomain with that value in the text file, so that I can use the answer file for AD installation.
You have a few options to do this. One is Environment.ExpandEnvironmentVariables. This uses a %variable% syntax (instead of $env:variable), so it would be simpler if you only want to substitute environment variables:
gc input.tmpl | foreach { [Environment]::ExpandEnvironmentVariables($_) } | sc out.ini
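With this approach the template itself would use the %name% syntax; a hypothetical input.tmpl for the scenario above would contain:
[DCINSTALL]
ReplicaOrNewDomain=%ReplicaOrNewDomain%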
A more complete expansion of PowerShell expressions can be achieved via ExpandString. This is more useful if you want to insert actual PowerShell expressions into the template:
gc input.tmpl | foreach { $ExecutionContext.InvokeCommand.ExpandString($_) } | sc out.ini
A third option would be something like a customized templating scheme that uses Invoke-Expression, which I implemented here.
You can do that with a simple replacement like this:
$f = 'C:\path\to\your.txt'
(Get-Content $f -Raw) -replace '\$env:ReplicaOrNewDomain', $env:ReplicaOrNewDomain |
Set-Content $f
or like this:
$f = 'C:\path\to\your.txt'
(Get-Content $f -Raw).Replace('$env:ReplicaOrNewDomain', $env:ReplicaOrNewDomain) |
Set-Content $f
Note that when using the -replace operator you need to escape the $ (because otherwise it'd have the special meaning "end of string"). When using the Replace() method you just need to use single quotes to prevent expansion of the variable in the search string.
However, why the intermediate step of replacing the template parameter __ReplicaOrNewDomain__ with a different template parameter $env:ReplicaOrNewDomain? You would make your life easier if you just kept the former and replaced that with the value of the environment variable ReplicaOrNewDomain.
One thing that I like to do with my template files is something like this.
[DCINSTALL]
ReplicaOrNewDomain={0}
OtherVariable={1}
Then in my code I can use the format operator -f to make the changes.
$pathtofile = "C:\temp\test.txt"
(Get-Content $pathtofile -Raw) -f $env:ReplicaOrNewDomain, "FooBar" | Set-Content $pathtofile
It can help if you have multiple things to update at once. Update your file with as many placeholders as you need; you can use the same one multiple times in the file if need be.
[DCINSTALL]
ReplicaOrNewDomain={0}
SimilarVariable={0}
Caveat
If your actual file is supposed to contain curly braces, you need to double them up so that they are escaped.
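For example, in this hypothetical template line the doubled braces survive as literals while {0} is replaced:
# Template line with both a literal brace pair and a placeholder:
'Note={{literal braces}} with value {0}' -f 'FooBar'
# Produces: Note={literal braces} with value FooBar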
You can use the ExpandString function, like this:
$ExecutionContext.InvokeCommand.ExpandString($TemplVal)
(assuming $TemplVal has the template string).
I have a powershell script for which I expect to pass quite few arguments in the command line. Having many arguments is not a problem since it is configured as a scheduled task but I'd like to make things easier for support people so that if ever they need to run the script from command line they have less things to type.
So, I am considering the option to have the arguments in a text file, either using Java-style properties file with key/value pairs in each line, or a simple XML file that would have one element per argument with a name element and a value element.
arg1=value1
arg2=value2
I'm interested with the views of PowerShell experts on the two options and also if this is the right thing to do.
Thanks
If you want to use the ini file approach, you can easily parse the ini data like so:
$ini = ConvertFrom-StringData (Get-Content .\args.ini -raw)
$ini.arg1
The one downside to this approach is that all the arg values are strings. That works fine for strings and even numbers, AFAICT. Where it falls down is with [switch] args: passing True or $true doesn't have the desired effect.
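One hypothetical workaround is to reconvert such values to real booleans before splatting; a minimal sketch, assuming the ini file stores the literal strings true/false for switch parameters:
$ini = ConvertFrom-StringData (Get-Content .\args.ini -Raw)
$cmdArgs = @{}
foreach ($key in $ini.Keys) {
    $value = $ini[$key]
    # Turn the strings 'true'/'false' back into booleans so [switch] parameters bind correctly.
    $cmdArgs[$key] = if ($value -in 'true', 'false') { [bool]::Parse($value) } else { $value }
}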
Another approach is to put the args in a .ps1 file and execute it to get back a hashtable with all the args that you can then splat e.g:
-- args.ps1 --
@{
    ComputerName = 'localhost'
    Name = 'foo'
    ThrottleLimit = 50
    EnableNetworkAccess = $true
}

$cmdArgs = .\args.ps1
$s = New-PSSession @cmdArgs
To extend Keith's answer slightly, it's possible to set actual variables based on the values from the config file e.g.
$ini = ConvertFrom-StringData (Get-Content .\args.ini -raw)
$ini.keys | foreach-object { set-variable -name $_ -value $ini.Item($_) }
So a file containing
a=1
b=2
c=3
When processed with above code should lead to three variables being created:
$a = 1, $b = 2, $c = 3
I expect this would work for simple string values, but I'm fairly sure it would not be 'smart' enough to convert, say, myArray = 1,2,3,4,5 into an actual array.
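If you did need a simple array, one hypothetical workaround is to split the string yourself:
# myArray=1,2,3,4,5 in the file yields the string "1,2,3,4,5"; split it manually:
$myArray = $ini.myArray -split ','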