I have a folder with multiple PDFs I need to print to different printers. I've created variables for each shared printer, and depending on the first 2 characters of the PDF's file name the print job should go to the matching printer.
I'm having trouble concatenating 2 strings to form an existing variable to use it later in the printing call.
This is what I have now (all PDFs in the dir start with 01 for now):
# SumatraPDF path
$SumatraExe = "C:\Users\Administrador.WIN-FPFTEJASDVR\AppData\Local\SumatraPDF\SumatraPDF.exe"
# PDFs to print path
$PDF = "C:\Program Files (x86)\CarrascocreditosPrueba2\CarrascocreditosPrueba2\DTE\BOL"
# Shared printers list
$01 = '\\192.168.1.70\epson'
$02 = '\\192.168.1.113\EPSON1050'
cd $PDF
While ($true) {
    Get-ChildItem | Where {!$_.PsIsContainer} | Select-Object Name | %{
        $Boleta = $_.Name
        $CodSucursal = $Boleta.Substring(0,2)
        $CodImpresora = '$' + $CodSucursal
        Write-Host $CodImpresora   # -> this shows the literal string $01 in PowerShell ISE
        Write-Host $01             # -> this shows the shared printer path
    }
    Start-Sleep -Seconds 5
}
# Actual PDF printing...
#& $SumatraExe -print-to $CodImpresora $PDF
So basically I need to reference an existing variable based on 2 concatenated strings. This could probably be achieved with a switch statement, but that would be too extensive.
concatenating 2 strings to form an existing variable
That won't work in PowerShell; variable tokens are always treated literally.
I'd suggest you use a hashtable instead:
# Shared printers table
$Impresoras = @{
    '01' = '\\192.168.1.70\epson'
    '02' = '\\192.168.1.113\EPSON1050'
}
Then inside the loop:
$Boleta = $_.Name
$CodSucursal = $Boleta.Substring(0,2)
$Impresora = $Impresoras[$CodSucursal]
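Putting it all together, the loop could look something like the following minimal sketch. It reuses $SumatraExe, $PDF and the -print-to call from your question, and passes each file's full path rather than the folder; the Write-Warning fallback is just an assumption about how you want to handle unmapped codes:
# Shared printers table
$Impresoras = @{
    '01' = '\\192.168.1.70\epson'
    '02' = '\\192.168.1.113\EPSON1050'
}
While ($true) {
    Get-ChildItem -Path $PDF | Where-Object { !$_.PSIsContainer } | ForEach-Object {
        $CodSucursal = $_.Name.Substring(0, 2)
        $Impresora   = $Impresoras[$CodSucursal]
        if ($Impresora) {
            # Print the individual PDF to the matching shared printer
            & $SumatraExe -print-to $Impresora $_.FullName
        }
        else {
            # No matching printer for this prefix (assumed handling)
            Write-Warning "No printer mapped for code '$CodSucursal' ($($_.Name))"
        }
    }
    Start-Sleep -Seconds 5
}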
Although the language syntax doesn't support variable variable names, you can resolve variables by name using either the Get-Variable cmdlet:
# Returns a PSVariable object describing the variable $01
Get-Variable '01'
# Returns the raw value currently assigned to $01
Get-Variable '01' -ValueOnly
... or by querying the Variable: PSDrive:
# Same effect as `Get-Variable 01`
Get-Item Variable:\01
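Applied to your loop, that cmdlet-based lookup would look roughly like this (shown only for completeness; the hashtable above remains the better option):
$CodSucursal = $Boleta.Substring(0, 2)
# Resolve the variable whose *name* is '01', '02', etc. and return its value
$Impresora = Get-Variable -Name $CodSucursal -ValueOnly -ErrorAction SilentlyContinue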
While these alternatives exist, I'd strongly suggest staying clear of them in scripts - they're slow, they make the code harder to read, and I don't think I've ever encountered a situation in which using a hashtable or an array wasn't ultimately easier :)
I've seen the @ symbol used in PowerShell to initialise arrays.
What exactly does the @ symbol denote and where can I read more about it?
In PowerShell V2, @ is also the Splat operator.
PS> # First use it to create a hashtable of parameters:
PS> $params = @{path = "c:\temp"; Recurse= $true}
PS> # Then use it to SPLAT the parameters - which is to say to expand a hash table
PS> # into a set of command line parameters.
PS> dir @params
PS> # That was the equivalent of:
PS> dir -Path c:\temp -Recurse:$true
PowerShell will actually treat any comma-separated list as an array:
"server1","server2"
So the @ is optional in those cases. However, for associative arrays, the @ is required:
@{"Key"="Value";"Key2"="Value2"}
Officially, @ is the "array operator." You can read more about it in the documentation that installed along with PowerShell, or in a book like "Windows PowerShell: TFM," which I co-authored.
While the above responses provide most of the answer, it is useful--even this late to the question--to provide the full answer, to wit:
Array sub-expression (see about_arrays)
Forces the value to be an array, even if a singleton or a null, e.g. $a = @(ps | where name -like 'foo')
Hash initializer (see about_hash_tables)
Initializes a hash table with key-value pairs, e.g.
$HashArguments = @{ Path = "test.txt"; Destination = "test2.txt"; WhatIf = $true }
Splatting (see about_splatting)
Lets you invoke a cmdlet with parameters from an array or a hash table rather than the more customary individually enumerated parameters, e.g. using the hash table just above, Copy-Item @HashArguments
Here strings (see about_quoting_rules)
Lets you create strings with easily embedded quotes, typically used for multi-line strings, e.g.:
$data = #"
line one
line two
something "quoted" here
"#
Because this type of question (what does 'x' notation mean in PowerShell?) is so common here on StackOverflow as well as in many reader comments, I put together a lexicon of PowerShell punctuation, just published on Simple-Talk.com. Read all about @ as well as % and # and $_ and ? and more at The Complete Guide to PowerShell Punctuation. Attached to the article is a wallchart that gives you everything on a single sheet.
You can also wrap the output of a cmdlet (or pipeline) in @() to ensure that what you get back is an array rather than a single item.
For instance, dir usually returns a list, but depending on the options, it might return a single object. If you are planning on iterating through the results with a foreach-object, you need to make sure you get a list back. Here's a contrived example:
$results = @( dir c:\autoexec.bat)
One more thing... an empty array (like to initialize a variable) is denoted @().
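For example, a quick illustration:
$list = @()        # empty array
$list += 'first'   # note: += builds a new array each time
$list += 'second'
$list.Count        # -> 2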
The Splatting Operator
To create an array, we create a variable and assign the array. Arrays are noted by the "@" symbol. Let's take the discussion above and use an array to connect to multiple remote computers:
$strComputers = @("Server1", "Server2", "Server3")
They are used for arrays and hashes.
PowerShell Tutorial 7: Accumulate, Recall, and Modify Data
Array Literals In PowerShell
I hope this helps you understand it a bit better.
You can store "values" against a key and return that value to do something.
In this case I have just provided @{a="";b="";c=""} and, if the key isn't one of the expected options (a, b, or c), no value is returned.
$array = @{
    a = "test1";
    b = "test2";
    c = "test3"
}
foreach ($elem in $array.GetEnumerator()) {
    if ($elem.key -eq "a") {
        $key = $elem.key
        $value = $elem.value
    }
    elseif ($elem.key -eq "b") {
        $key = $elem.key
        $value = $elem.value
    }
    elseif ($elem.key -eq "c") {
        $key = $elem.key
        $value = $elem.value
    }
    else {
        Write-Host "No other value"
    }
    Write-Host "Key: " $key "Value: " $value
}
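As a side note, if you only need the value for one known key, indexing the hashtable directly is shorter than enumerating it; a minimal illustration using the same $array:
$array['a']              # -> test1
$array.b                 # -> test2 (dot notation works too)
$array.ContainsKey('d')  # -> False, so you can guard against missing keys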
I would like to write out a hash table to a file with an array as one of the hash table items. My array item is written out, but it contains files=System.Object[]
Note - Once this works, I will want to reverse the process and read the hash table back in again.
clear-host
$resumeFile="c:\users\paul\resume.log"
$files = Get-ChildItem *.txt
$files.GetType()
write-host
$types="txt"
$in="c:\users\paul"
Remove-Item $resumeFile -ErrorAction SilentlyContinue
$resumeParms=@{}
$resumeParms['types']=$types
$resumeParms['in']=($in)
$resumeParms['files']=($files)
$resumeParms.GetEnumerator() | ForEach-Object {"{0}={1}" -f $_.Name,$_.Value} | Set-Content $resumeFile
write-host "Contents of $resumefile"
get-content $resumeFile
Results
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Object[] System.Array
Contents of c:\users\paul\resume.log
files=System.Object[]
types=txt
in=c:\users\paul
The immediate fix is to create your own array representation, by enumerating the elements, separating them with ',' and enclosing string values in '...':
# Sample input hashtable. [ordered] preserves the entry order.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }
$resumeParms.GetEnumerator() |
  ForEach-Object {
    "{0}={1}" -f $_.Name, (
      $_.Value.ForEach({
        (("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
      }) -join ','
    )
  }
Note that this represents all non-primitive .NET types as strings, by their .ToString() representation, which may or may not be good enough.
The above outputs something like:
foo=42
bar='baz'
arr='C:\Users\jdoe\file1.txt','C:\Users\jdoe\file2.txt','C:\Users\jdoe\file3.txt'
See the bottom section for a variation that creates a *.psd1 file that can later be read back into a hashtable instance with Import-PowerShellDataFile.
Alternatives for saving settings / configuration data in text files:
If you don't mind taking on a dependency on a third-party module:
Consider using the PSIni module, which uses the Windows initialization file (*.ini) file format; see this answer for a usage example.
Adding support for initialization files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #9035.
Consider using YAML as the file format; e.g., via the FXPSYaml module.
Adding support for YAML files to PowerShell itself (not present as of 7.0) is being proposed in GitHub issue #3607.
The Configuration module provides commands to write to and read from *.psd1 files, based on persisted PowerShell hashtable literals, as you would declare them in source code.
Alternatively, you could modify the output format in the code at the top to produce such files yourself, which allows you to read them back in via
Import-PowerShellDataFile, as shown in the bottom section.
As of PowerShell 7.0 there's no built-in support for writing such a representation; that is, there is no complementary Export-PowerShellDataFile cmdlet.
However, adding this ability is being proposed in GitHub issue #11300.
If creating a (mostly) plain-text file is not a must:
The solution that provides the most flexibility with respect to the data types it supports is the XML-based CLIXML format that Export-Clixml creates, as Lee Dailey suggests, whose output can later be read with Import-Clixml.
However, this format too has limitations with respect to type fidelity, as explained in this answer.
Saving a JSON representation of the data, as Lee also suggests, via ConvertTo-Json / ConvertFrom-Json, is another option, which makes for human-friendlier output than XML, but is still not as friendly as a plain-text representation; notably, all \ chars. in file paths must be escaped as \\ in JSON.
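For instance, the round trips for both formats could look roughly like this, as a minimal sketch using the $resumeParms hashtable from above; the file names are just examples:
# CLIXML round trip (best type fidelity among the built-in options)
$resumeParms | Export-Clixml -Path .\resume.clixml
$fromClixml = Import-Clixml -Path .\resume.clixml

# JSON round trip (human-friendlier text, but types are simplified)
$resumeParms | ConvertTo-Json -Depth 5 | Set-Content .\resume.json
$fromJson = Get-Content -Raw .\resume.json | ConvertFrom-Json
# Note: ConvertFrom-Json returns a [pscustomobject], not a hashtable
# (PowerShell 6+ offers -AsHashtable to get a hashtable back).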
Writing a *.psd1 file that can be read with Import-PowerShellDataFile
Within the stated constraints regarding data types - in essence, anything that isn't a number or a string becomes a string - it is fairly easy to modify the code at the top to write a PowerShell hashtable-literal representation to a *.psd1 file so that it can be read back in as a [hashtable] instance via Import-PowerShellDataFile:
As noted, if you don't mind installing a module, consider the Configuration module, which has this functionality built in.
# Sample input hashtable.
$resumeParms = [ordered] @{ foo = 42; bar = 'baz'; arr = (Get-ChildItem *.txt) }
# Create a hashtable-literal representation and save it to file settings.psd1
@"
@{
$(
  ($resumeParms.GetEnumerator() |
    ForEach-Object {
      " {0}={1}" -f $_.Name, (
        $_.Value.ForEach({
          (("'{0}'" -f ($_ -replace "'", "''")), $_)[$_.GetType().IsPrimitive]
        }) -join ','
      )
    }
  ) -join "`n"
)
}
"@ > settings.psd1
If you read settings.psd1 with Import-PowerShellDataFile settings.psd1 later, you'll get a [hashtable] instance whose entries you can access as usual and which produces the following display output:
Name Value
---- -----
bar baz
arr {C:\Users\jdoe\file1.txt, C:\Users\jdoe\file2.txt, C:\Users\jdoe\file3.txt}
foo 42
Note how the order of entries (keys) was not preserved, because hashtable entries are inherently unordered.
On writing the *.psd1 file you can preserve the key(-creation) order by declaring the input hashtable (System.Collections.Hashtable) as [ordered], as shown above (which creates a System.Collections.Specialized.OrderedDictionary instance), but the order is, unfortunately, lost on reading the *.psd1 file.
As of PowerShell 7.0, even if you place [ordered] before the opening @{ in the *.psd1 file, Import-PowerShellDataFile quietly ignores it and creates an unordered hashtable nonetheless.
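For reference, reading the file back in and using the resulting hashtable looks like this (a small sketch, using the settings.psd1 file created above):
$settings = Import-PowerShellDataFile .\settings.psd1   # returns a [hashtable]
$settings['foo']   # -> 42
$settings.bar      # -> baz
$settings.Keys     # key order is not guaranteed to match the file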
This is a problem I deal with all the time and it drives me mad. I really think that there should be a function specifically for this action... so I wrote one.
function ConvertHashTo-CSV
{
    Param (
        [Parameter(Mandatory=$true)]
        $hashtable,
        [Parameter(Mandatory=$true)]
        $OutputFileLocation
    )
    $hastableAverage = $NULL # This will only work for hashtables where each entry is consistent. This checks for consistency.
    foreach ($hashtabl in $hashtable)
    {
        $hastableAverage = $hastableAverage + $hashtabl.count # Counts the number of headings.
    }
    $Paritycheck = $hastableAverage / $hashtable.count # Gets the average number of headings
    if ( ($parity = $Paritycheck -is [int]) -eq $False) # If the average is not an int, the hashtable is not consistent
    {
        write-host "Error. Hashtable is inconsistent" -ForegroundColor red
        Start-Sleep -Seconds 5
        return
    }
    $HashTableHeadings = $hashtable[0].GetEnumerator().name # Get the hashtable headings
    $HashTableCount = ($hashtable[0].GetEnumerator().name).count # Count the headings
    $HashTableString = $null # String to hold the CSV
    foreach ($HashTableHeading in $HashTableHeadings) # Creates the first row containing the column headings
    {
        $HashTableString += $HashTableHeading
        $HashTableString += ", "
    }
    $HashTableString = $HashTableString -replace ".{2}$" # Removes the trailing ", " added by the loop above
    $HashTableString += "`n"
    foreach ($hashtabl in $hashtable) # Adds the data rows
    {
        for ($i = 0; $i -lt $HashTableCount; $i++)
        {
            $HashTableString += $hashtabl[$i]
            if ($i -lt ($HashTableCount - 1))
            {
                $HashTableString += ", "
            }
        }
        $HashTableString += "`n"
    }
    $HashTableString | Out-File -FilePath $OutputFileLocation # Writes the CSV to a file
}
To use this, copy the function into your script, run it, and then
ConvertHashTo-CSV -hashtable $Hasharray -OutputFileLocation c:\temp\data.CSV
The code is annotated, but here's a brief explanation of what it does: it steps through the array of hashtables and appends each entry to a string, adding the formatting required to make the string a CSV file, then writes that string out to a file.
The main limitation is that the hashtables in the array all have to contain the same number of fields. To get around this, if a hashtable has a field that doesn't contain data, ensure it contains at least a space.
More on this can be found here : https://grumpy.tech/powershell-convert-hashtable-to-csv/
I am importing a CSV file with two fields per record, "Name" and "Path".
$softwareList = Import-Csv C:\Scripts\NEW_INSTALLER\softwareList.csv
$count = 0..($softwareList.count -1)
foreach($i in $count){
Write-Host $softwareList[$i].Name,$softwareList[$i].Path
}
What I am trying to do is dynamically assign the Name and Path of each record to a WPFCheckbox variable based on the $i variable. These checkboxes are named WPFCheckbox0, WPFCheckbox1, WPFCheckbox2 and so on. The objects have two properties I planned on using: "Command" to store $SoftwareList[$i].Path and "Content" to store $SoftwareList[$i].Name.
I cannot think of a way to properly loop through these variables and assign the properties from the CSV to the properties on their respective WPFCheckboxes.
Any suggestions would be very appreciated.
Invoke-Expression is one way, though note Mathias' commented concerns on the overall approach.
Within your foreach loop, you can do something like:
invoke-expression "`$WPFCheckbox$i`.Command = $($SoftwareList[$i].Path)"
invoke-expression "`$WPFCheckbox$i`.Content= $($SoftwareList[$i].Name)"
The backtick (`) just before $WPFCheckbox prevents what would otherwise be an undefined variable from being evaluated immediately (before the expression is invoked), while the $i is expanded. This gives you a string containing $WPFCheckbox1, to which you then append the property names and values. The $SoftwareList values are processed immediately into the raw string.
The Invoke-Expression then evaluates and executes the entire string as if it were a regular statement.
Here's a stand-alone code snippet to play with:
1..3 |% {
invoke-expression "`$MyVariable$_` = New-Object PSObject"
invoke-expression "`$MyVariable$_` | add-member -NotePropertyName Command -NotePropertyValue [String]::Empty"
invoke-expression "`$MyVariable$_`.Command = 'Path #$_'"
}
$MyVariable1 | Out-String
$MyVariable2 | Out-String
$MyVariable3 | Out-String
As a side note (since I can't comment yet on your original question), creating an array just to act as an iterator through the lines of the file is really inefficient. There are definitely better ways to do that.
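For instance, you could iterate over the imported rows directly instead of building an index range, roughly like this:
$softwareList = Import-Csv C:\Scripts\NEW_INSTALLER\softwareList.csv
foreach ($software in $softwareList) {
    Write-Host $software.Name, $software.Path
}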
values.ini looks like
[default]
A=1
B=2
C=3
foo.txt looks like
Now is the %A% for %a% %B% men to come to the %C% of their %c%
I want to use PowerShell to search for all of the %x% values in values.ini and then replace every matching instance in foo.txt with the corresponding value, case-insensitively, generating the following:
Now is the 1 for 1 2 men to come to the 3 of their 3
Assuming PowerShell version 3.0 or newer, you can use the ConvertFrom-StringData cmdlet to parse the key-value pairs in your ini file, but you'll need to filter out the [default] directive:
# grab relevant lines from file
$KeyValPairs = Get-Content .\values.ini | Where {$_ -like "*=*" }
# join strings together as one big string
$KeyValPairString = $KeyValPairs -join [Environment]::NewLine
# create hashtable/dictionary from string with ConvertFrom-StringData
$Dictionary = $KeyValPairString |ConvertFrom-StringData
You can then use the [regex]::Replace() method to do a lookup against the dictionary for each match you want to replace:
Get-Content .\foo.txt |ForEach-Object {
[Regex]::Replace($_, '%(\p{L}+)%', {
param($Match)
# look term up in dictionary
return $Dictionary[$Match.Groups[1].Value]
})
}
To complement Mathias R. Jessen's excellent answer with alternative approaches that also take the later requirement change of limiting values to a specific INI-file section into account (PSv2+, except for Get-Content -Raw; in PSv2, use (Get-Content ...) -join "`n" instead.)
Using PsIni\Get-IniContent and [environment]::ExpandEnvironmentVariables():
# Translate key-value pairs from the section of interest
# into environment variables.
# After this command, the following environment variables are defined:
# $env:A, with value 1 (cmd.exe equivalent: %A%)
# $env:B, with value 2 (cmd.exe equivalent: %B%)
# $env:C, with value 3 (cmd.exe equivalent: %C%)
$section = 'default' # Specify the INI-file section of interest.
(Get-IniContent values.ini)[$section].GetEnumerator() |
ForEach-Object { Set-Item "env:$($_.Name)" -Value $_.Value }
# Read the template string as a whole from file foo.txt, and expand the
# environment-variable references in it, using the .NET framework.
# With the sample input, this yields
# "Now is the 1 for 1 2 men to come to the 3 of their 3".
[environment]::ExpandEnvironmentVariables((Get-Content -Raw foo.txt))
The 3rd-party Get-IniContent cmdlet, which conveniently reads an INI file (*.ini) into a nested, ordered hashtable, can easily be installed with Install-Module PsIni from an elevated console (alternatively, add -Scope CurrentUser), if you have PS v5+ (or v3 or v4 with PackageManagement installed).
This solution takes advantage of the fact that the placeholders (e.g., %a%) look like cmd.exe-style environment-variable references.
Note the assumptions and caveats:
All ini-file keys / placeholder names are legal environment-variable names.
Preexisting variables may be overwritten, which can be problematic with names such as PATH.
Cross-platform caveat: on Unix-like platforms, environment-variable references are case-sensitive, so the solution won't work the same there.
Using custom INI-file parsing and [environment]::ExpandEnvironmentVariables():
If installing a module for INI-file parsing is not an option, the following solution uses a - rather complex - regular expression to extract the section of interest via the -replace operator.
$section = 'default' # Specify the INI-file section of interest.
# Get all non-empty, non-comment lines from the section using a regex.
$sectLines = (Get-Content -Raw values.ini) -replace ('(?smn)\A.*?(^|\r\n)\[' + [regex]::Escape($section) + '\]\r\n(?<sectLines>.*?)(\r\n\[.*|\Z)'), '${sectLines}' -split "`r`n" -notmatch '(^;|^\s*$)'
# Define the key-value pairs as environment variables.
$sectlines | ForEach-Object { $tokens = $_ -split '=', 2; Set-Item "env:$($tokens[0].Trim())" -Value $tokens[1].Trim() }
# Read the template string as a whole, and expand the environment-variable
# references in it, as before.
[environment]::ExpandEnvironmentVariables((Get-Content -Raw foo.txt))
I found a simpler solution using this INI script called Get-IniContent.
#read from Setup.ini
$INI = Get-IniContent .\Setup.ini
$sec="setup"
#REPLACE VARIABLES
foreach ($c in Get-ChildItem -Path .\Application -Recurse -Filter *.config)
{
    Write-Output $c.FullName
    Write-Output $c.DirectoryName
    $configFile = Get-Content $c.FullName -Raw
    foreach ($v in $INI[$sec].Keys)
    {
        $k = '%' + $v + '%'
        $match = [regex]::IsMatch($configFile, $k)
        if ($match)
        {
            $configFile = $configFile -ireplace [regex]::Escape($k), $INI[$sec][$v]
        }
    }
    Set-Content $c.FullName -Value $configFile
}
Trying to create a script that will read the contents of a directory containing a number of "paired" datasets of customer data. For each customer there will be 2 datasets, with the naming convention appearing consistently in the form CustomerNo_DataType.csv, where CustomerNo will always be a numerical string value.
I've already written a crude version of this script with the customer numbers hard-coded so now I'm trying to improve on that - here's what I've got so far:
$files = Get-ChildItem "Path-to-data-files"
$files = $files.FullName
for ($i=0; $i -lt $files.Count; $i++){
$thisFile = $files[$i].Split("\")
This leaves me with an array containing the components of the full pathname, so I grab the filename from the last position in the array:
$thisFile = $thisFile[$thisFile.Count - 1]
...
}
I want to use the customer number to create a hashtable, so if the customer number in the filename was 12345 then I want to create a hashtable named $12345. I'm not having any issues accessing the value, I'm just not sure how to use it to name something.
Use Split-Path to get the file element of a path:
$file = Split-Path 'C:\path\to\some\file.txt' -Leaf
Use New-Variable if for some reason you need to define a variable name from a variable.
$customerNo = '12345'
New-Variable -Name $customerNo -Value @{}
However, I wouldn't recommend creating a bunch of dynamically named variables. It's usually a lot easier to handle if you create a "parent" hashtable for the dynamic names. You can have nested hashtables inside it if you need that:
$customerNo = '12345'
$customers = @{}
$customers[$customerNo] = @{}
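Building on that, you could populate the parent hashtable straight from the file names, roughly along these lines (a sketch; the DataType sub-key and the wildcard filter are assumptions based on the CustomerNo_DataType.csv convention described in the question):
$customers = @{}

# "Path-to-data-files" is the same placeholder used in the question
Get-ChildItem "Path-to-data-files" -Filter '*_*.csv' | ForEach-Object {
    # Split "12345_DataType.csv" into its two parts
    $customerNo, $dataType = $_.BaseName -split '_', 2

    if (-not $customers.ContainsKey($customerNo)) {
        $customers[$customerNo] = @{}
    }
    $customers[$customerNo][$dataType] = $_.FullName
}

# Later, e.g.: $customers['12345']['SomeDataType']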