In my CSV file I have a "SharePoint Site" column and a few other columns. I'm trying to split the ID out of the "SharePoint Site" column and put it into a new column called "SharePoint ID", but I'm not sure how to do it, so I would really appreciate any help or suggestions.
$downloadFile = Import-Csv "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
(($downloadFile -split "/") -split "_") | Select-Object -Index 5
CSV file
SharePoint Site
Include:[https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com]
Include:[https://companyname-my.sharepoint.com/personal/tzksn_nam_corp_kl_com]
Include:[https://companyname.sharepoint.com/sites/msteams_c578f2/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2Fmsteams%5Fc578f2%2FShared%20Documents%2FBittner%2DWilfong%20%2D%20Litigation%20Hold%2FWork%20History&viewid=b3e993a1%2De0dc%2D4d33%2D8220%2D5dd778853184]
Include:[https://companyname.sharepoint.com/sites/msteams_c578f2/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2Fmsteams%5Fc578f2%2FShared%20Documents%2FBittner%2DWilfong%20%2D%20Litigation%20Hold%2FWork%20History&viewid=b3e993a1%2De0dc%2D4d33%2D8220%2D5dd778853184]
Include:[All]
After splitting, the result should appear under a new column called "SharePoint ID":
SharePoint ID
2. elksn
3. tzksn
4. msteams_c578f2
5. msteams_c578f2
6. All
Try this:
# Import the CSV and grab just the 'SharePoint Site' column as an array
$Sites = (Import-Csv C:\temp\Modified-Audit-Log-Records.csv).'SharePoint Site'

# Create the export array
$Export = @()

# Loop through the SharePoint sites one at a time
ForEach ($Site in $Sites) {
    # Clean up the input to leave only the hyperlink
    $Site = $Site.replace('Include:[','')
    $Site = $Site.replace(']','')

    # Split the hyperlink on '/' and take the fifth element (indexing is zero-based, so [4] is the fifth)
    $SiteID = $Site.split('/')[4]

    # The 'SharePoint Site' value 'Include:[All]' has no slashes, so the split leaves $SiteID empty.
    # This If statement detects that case and uses the cleaned-up value ('All') as the ID.
    if ($Site -eq 'All') {
        $SiteID = $Site
    }

    # Wrap the Site ID in an object for export
    $SiteExport = [pscustomobject]@{
        'SharePoint ID' = $SiteID
    }

    # Add each SiteExport to the Export array
    $Export += $SiteExport
}

# Write out the export
$Export
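If you also want to save that new column to a file, a minimal follow-up sketch (output path assumed) could be:
# Hypothetical output path - adjust as needed
$Export | Export-Csv -Path 'C:\AuditLogSearch\New folder\SharePoint-IDs.csv' -NoTypeInformation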
A concise solution that appends a 'SharePoint ID' column to the existing columns by way of a calculated property:
Import-Csv 'C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv' |
  Select-Object *, @{
    Name       = 'SharePoint ID'
    Expression = {
      $tokens = $_.'SharePoint Site' -split '[][/]'
      if ($tokens.Count -eq 3) { $tokens[1] }  # matches 'Include:[All]'
      else                     { $tokens[5] -replace '_nam_corp_kl_com$' }
    }
  }
Note:
To see all resulting column values, pipe the above to Format-List.
To re-export the results to a CSV file, pipe to Export-Csv.
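For instance, a minimal sketch of both notes combined (variable and output path assumed), capturing the pipeline above first:
# assumes $withIds holds the output of the Import-Csv | Select-Object pipeline shown above
$withIds | Format-List                                                                 # inspect every column value
$withIds | Export-Csv 'C:\AuditLogSearch\New folder\With-IDs.csv' -NoTypeInformation   # re-export, path assumed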
You have 3 distinct patterns you are trying to extract data from. I believe regex would be an appropriate tool.
If you want the new CSV to have just the single ID column:
$file = "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
$IdList = switch -Regex -File ($file){
'Include:.+(?=/(\w+?)_)(?<=personal)' {$matches.1}
'Include:(?=\[(\w+)\])' {$matches.1}
'Include:.+(?=/(\w+?)/)(?<=sites)' {$matches.1}
}
$IdList |
ConvertFrom-Csv -Header "Sharepoint ID" |
Export-Csv -Path $newfile -NoTypeInformation
If you want to add a column to your existing CSV:
$file = "C:\AuditLogSearch\New folder\Modified-Audit-Log-Records.csv"
$properties = '*', @{
    Name       = 'Sharepoint ID'
    Expression = {
        switch -Regex ($_.'SharePoint Site'){
            'Include:.+(?=/(\w+?)_)(?<=personal)' {$matches.1}
            'Include:(?=\[(\w+)\])'               {$matches.1}
            'Include:.+(?=/(\w+?)/)(?<=sites)'    {$matches.1}
        }
    }
}
Import-Csv -Path $file |
    Select-Object $properties |
    Export-Csv -Path $newfile -NoTypeInformation
Regex details:
.+       Match one or more of any character
(?=...)  Positive lookahead
(...)    Capture group
\w+      Match one or more word characters
?        Lazy quantifier (when it follows + or *)
(?<=...) Positive lookbehind
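As a quick sanity check, here is a small sketch (sample string taken from the question) showing how the 'personal' pattern captures the ID:
$sample = 'Include:[https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com]'
if ($sample -match 'Include:.+(?=/(\w+?)_)(?<=personal)') {
    $matches.1   # -> elksn7
}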
This would require more testing to confirm it works well, but it works with the input we have. The main concept is to use System.Uri to parse the strings. From what I'm seeing, the segment you are looking for is always the third one ([2]); depending on the previous segments, either split on _, trim the trailing /, or leave the string as is when IsAbsoluteUri is $false.
$csv = Import-Csv path/to/test.csv
$result = foreach($line in $csv)
{
    $uri = [uri]($line.'SharePoint Site' -replace '^Include:\[|]$')
    $id = switch($uri)
    {
        { -not $_.IsAbsoluteUri } {
            $_
            break
        }
        { $_.Segments[1] -eq 'personal/' } {
            $_.Segments[2].Split('_')[0]
            break
        }
        { $_.Segments[1] -eq 'sites/' } {
            $_.Segments[2].TrimEnd('/')
        }
    }
    [pscustomobject]@{
        'SharePoint Site' = $line.'SharePoint Site'
        'SharePoint ID'   = $id
    }
}
$result | Format-List
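To see what System.Uri gives you here, a quick sketch against one of the sample URLs from the question:
([uri]'https://companyname-my.sharepoint.com/personal/elksn7_nam_corp_kl_com').Segments
# -> /   personal/   elksn7_nam_corp_kl_com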
Related
I've been on this for a few days now. I'm trying to parse multiple text files containing data like this:
[Cluster1]
GatewayIp=xx.xxx.xxx.xx
IpAddress=xx.xxx.xxx.x
MTU=0000
NetMask=xxx.xxx.xxx.0
Port=xxx
Protocol=xxxx/xxxxx
Sessions=xxxxxx
Bands=xxx, xxx, x
Binding=xxxxx
GroupNumber=x
InitQueue=xxxxxx
Interface=xxxxxx
Process=xxx
SupportsCar=No
SupportsCom=Yes
SupportsPos=Yes
SupportsXvd=No
[Cluster2]
GatewayIp=xx.xxx.xxx.xx
IpAddress=xx.xxx.xxx.x
MTU=0000
NetMask=xxx.xxx.xxx.0
Port=xxx
Protocol=xxxx/xxxxx
Sessions=xxxxxx
Bands=xxx, xxx, x
Binding=xxxxx
GroupNumber=x
InitQueue=xxxxxx
Interface=xxxxxx
Process=xxx
SupportsCar=No
SupportsCom=No
SupportsPos=No
SupportsXvd=Yes
I want to extract the "IpAddress" in the section where those lines are present:
SupportsCom=Yes
SupportsPos=Yes
The thing is, I've tried using -Context to grab the nth line after the section name "[Cluster1]", but that section name is different from file to file...
$ip = Select-String -Path "$location" -Pattern "\[Cluster1\]" -Context 0,2 |
Foreach-Object {$_.Context.PostContext}
I've tried using the PreContext to grab the nth line before SupportsCom=Yes, but the line position of "IpAddress=" is different from file to file...
$ip = Select-String -Path "$location" -Pattern " SupportsCom=Yes" -Context 14,0 |
Foreach-Object { $_.Line,$_.Context.PreContext[0].Trim()}
Is there a way to grab the section containing "SupportsCom=Yes", knowing that the section is delimited by a blank line above and below, then search that section for a string that contains "IpAddress=" and return the value after the "="?
OK, since you are not allowed to use a module (perhaps later..), this should get you what you want:
# change the extension in the Filter to match that of your files
$configFiles = Get-ChildItem -Path 'X:\somewhere' -Filter '*.ini' -File
$result = foreach ($file in $configFiles) {
    # initialize these variables to $null
    $IpAddress = $supportsCom = $supportsPos = $null
    # loop through the file line by line and try regex matches on them
    switch -Regex -File $file.FullName {
        '^\[([^\]]+)]' {
            # did we get all wanted entries from the previous cluster?
            if ($IpAddress -and $supportsCom -and $supportsPos) {
                if ($supportsCom -eq 'Yes' -and $supportsPos -eq 'Yes') {
                    # just output the IpAddress so it gets collected in variable $result
                    $IpAddress
                }
                # reset the variables to $null
                $IpAddress = $supportsCom = $supportsPos = $null
            }
            # start a new cluster
            $cluster = $matches[1]
        }
        '^\s*IpAddress\s*=\s*(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' { $IpAddress = $matches[1] }
        '^\s*SupportsCom\s*=\s*(Yes|No)'                           { $supportsCom = $matches[1] }
        '^\s*SupportsPos\s*=\s*(Yes|No)'                           { $supportsPos = $matches[1] }
    }
    # don't forget to evaluate the last cluster in the file
    if ($IpAddress -and $supportsCom -eq 'Yes' -and $supportsPos -eq 'Yes') {
        $IpAddress
    }
}
# show results on screen
$result
# or save as text file
$result | Set-Content -Path 'X:\somewhere\IpAddresses.txt'
Updated answer:
If you don't care about the name of the section(s) where IpAddress is found, you can use this "one-liner" (broken into multiple lines for readability):
$ip = (Get-Content $location -Raw) -split '\[.+?\]' |
ConvertFrom-StringData |
Where-Object { $_.SupportsCom -eq 'Yes' -and $_.SupportsPos -eq 'Yes' } |
ForEach-Object IpAddress
The Get-Content line reads the input file as a single multi-line string and splits it at the section headers (e.g. [Cluster1]).
ConvertFrom-StringData converts the Key = Value lines into one hashtable per section.
For each hashtable, Where-Object checks whether it contains SupportsCom=Yes and SupportsPos=Yes.
ForEach-Object IpAddress is shorthand for writing Select-Object -ExpandProperty IpAddress which gives you the actual value of IpAddress instead of an object that contains a member named IpAddress.
Note that $ip can be either a single string value or an array of strings (if there are multiple matching sections).
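If you haven't used ConvertFrom-StringData before, a tiny sketch (made-up sample block) shows the idea, with each Key=Value block becoming one hashtable:
@"
IpAddress=10.0.0.1
SupportsCom=Yes
SupportsPos=Yes
"@ | ConvertFrom-StringData   # -> a hashtable with the keys IpAddress, SupportsCom and SupportsPos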
Original answer:
You could also write a general-purpose function that converts INI sections into objects. This enables you to use the pipeline with a simple Where-Object statement to get the data you are interested in.
Generic function to output INI sections as objects, one by one:
Function Read-IniObjects {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, ValueFromPipeline)] [String] $Path
    )
    process {
        $section = @{}  # A hashtable that stores all properties of the currently processed INI section.
        # Read file line by line and match each line by the given regular expressions.
        switch -File $Path -RegEx {
            '^\s*\[(.+?)\]\s*$' {           # [SECTION]
                # Output all data of previous section
                if( $section.Count ) { [PSCustomObject] $section }
                # Create new section data
                $section = [ordered] @{ IniSection = $matches[ 1 ] }
            }
            '^\s*(.+?)\s*=\s*(.+?)\s*$' {   # KEY = VALUE
                $key, $value = $matches[ 1..2 ]
                $section.$key = $value
            }
        }
        # Output all data of last section
        if( $section.Count ) { [PSCustomObject] $section }
    }
}
Usage:
$ip = Read-IniObjects 'test.ini' |
Where-Object { $_.SupportsCom -eq 'Yes' -and $_.SupportsPos -eq 'Yes' } |
ForEach-Object IpAddress
Notes:
The INI file is parsed using the switch statement, which can directly use a file as input. This is much faster than using a Get-Content loop.
As we are using the -RegEx parameter, the switch statement matches each line of the file against the given regular expressions, entering a case branch only if the current line matches.
For a detailed explanation of how the regexes work:
match lines like [Section] -> RegEx101
match lines like Key = Value -> RegEx101
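A small sketch of the KEY = VALUE pattern in action (sample line borrowed from the question's data):
'SupportsCom=Yes' -match '^\s*(.+?)\s*=\s*(.+?)\s*$'   # -> True
$matches[ 1..2 ]                                       # -> SupportsCom, Yes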
ProcessName UserName            PSComputerName
AnyDesk     NT-AUTORITÄT\SYSTEM localhost
csrss                           dc-01
ctfmon      SAD\Administrator   rdscb-01
            SAD\Administrator   srv-01
How can I remove the second and the last row here (the ones with a blank value)?
Based on your comments, if $data is read from a CSV file and contains custom objects, you can do the following:
$data | where { $_.PsObject.Properties.Value -notcontains $null -and $_.PsObject.Properties.Value -notcontains '' }
This will apply to every property and won't require supplying named properties.
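Put together as a sketch (file paths assumed):
$data  = Import-Csv 'C:\temp\processes.csv'
$clean = $data | where { $_.PsObject.Properties.Value -notcontains $null -and $_.PsObject.Properties.Value -notcontains '' }
$clean | Export-Csv 'C:\temp\processes-clean.csv' -NoTypeInformation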
There are more elegant ways, but here is a somewhat ugly answer to illustrate the idea...
$Data = #"
"ProcessName","UserName","PSComputerName"
"AnyDesk","NT-AUTORITÄT\SYSTEM","localhost"
"csrss","","dc-01"
"ctfmon","SAD\Administrator","rdscb-01"
"","SAD\Administrator","srv-01"
"# | Out-File -FilePath 'D:\Temp\ProcData.csv'
$headers = (
    (Get-Content -Path 'D:\Temp\ProcData.csv') -replace '"','' |
    select -First 1
) -split ','
$data = Import-Csv -Path 'D:\Temp\ProcData.csv'
$colCnt = $headers.count
$lineNum = 0

:newline
foreach ($line in $data)
{
    $lineNum++
    for ($i = 0; $i -lt $colCnt; $i++)
    {
        # test to see if the contents of a cell is empty
        if (-not $line.$($headers[$i]))
        {
            Write-Warning -Message "$($lineNum): $($headers[$i]) is blank"
            continue newline
        }
    }
    "$($lineNum): OK"
    # Perform other actions with good data
}
<#
# Results
1: OK
WARNING: 2: UserName is blank
3: OK
WARNING: 4: ProcessName is blank
#>
I'm receiving an automated report from a system that cannot be modified as a CSV. I am using PowerShell to split the CSV into multiple files and parse out the specific data needed. The CSV contains columns that may contain no data, 1 value, or multiple values that are comma separated within the CSV file itself.
Example(UPDATED FOR CLARITY):
"Group","Members"
"Event","362403"
"Risk","324542, 340668, 292196"
"Approval","AA-334454, 344366, 323570, 322827, 360225, 358850, 345935"
"ITS","345935, 358850"
"Services",""
I want the data to have one entry per line like this (UPDATED FOR CLARITY):
"Group","Members"
"Event","362403"
"Risk","324542"
"Risk","340668"
"Risk","292196"
#etc.
I've tried splitting the data and I just get an unknown number of columns at the end.
I tried a foreach loop, but can't seem to get it right (pseudocode below):
Import-CSV $Groups
ForEach ($line in $Groups){
If($_.'Members'.count -gt 1, add-content "$_.Group,$_.Members[2]",)}
I appreciate any help you can provide. I've searched all the stackexchange posts and used Google but haven't been able to find something that addresses this exact issue.
Import-Csv .\input.csv | ForEach-Object {
    ForEach ($Member in ($_.Members -Split ',')) {
        [PSCustomObject]@{ Group = $_.Group; Member = $Member.Trim() }
    }
} | Export-Csv .\output.csv -NoTypeInformation
# Get the raw text contents
$CsvContents = Get-Content "\path\to\file.csv"
# Convert it to a table object
$CsvData = ConvertFrom-Csv -InputObject $CsvContents
# Iterate through the records in the table
ForEach ($Record in $CsvData) {
    # Split the Members value at commas and trim the whitespace from each piece
    $Record.Members -Split "," | % {
        $Member = $_.Trim()
        # Skip blank member values
        if ($Member) {
            # Create our output string
            $OutputString = "$($Record.Group), $Member"
            # Write our output string to a file
            Add-Content -Path "\path\to\output.txt" -Value $OutputString
        }
    }
}
This should work, you had the right idea but I think you may have been encountering some syntax issues. Let me know if you have questions :)
Revised the code as per your updated question:
$List = Import-Csv "\path\to\input.csv"
foreach ($row in $List) {
    $Group   = $row.Group
    $Members = $row.Members -split ","
    # Process each value in Members
    foreach ($MemberValue in $Members) {
        # PS v3 and above: emit an object so Export-Csv writes proper Group,Member columns
        [pscustomobject]@{ Group = $Group; Member = $MemberValue.Trim() } |
            Export-Csv "\path\to\output.csv" -NoTypeInformation -Append
        # PS v2
        # $Group + "," + $MemberValue | Out-File "\path\to\output.csv" -Append
    }
}
I have a large text file (*.txt) in the following format:
; KEY 123456
; Any Company LLC
; 123 Main St, Anytown, USA
SEC1 = xxxxxxxxxxxxxxxxxxxxx
SEC2 = xxxxxxxxxxxxxxxxxxxxx
SEC3 = xxxxxxxxxxxxxxxxxxxxx
SEC4 = xxxxxxxxxxxxxxxxxxxxx
SEC5 = xxxxxxxxxxxxxxxxxxxxx
SEC6 = xxxxxxxxxxxxxxxxxxxxx
This is repeated for about 350 - 400 keys. These are HASP keys and the SEC codes associated with them. I am trying to parse this file into a CSV file with KEY and SEC1 - SEC6 as the headers, with the rows being filled in. This is the format I am trying to get to:
KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6
123456,xxxxxxxxxx,xxxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx
456789,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx,xxxxxxxxxx
I have been able to get a script to export to a CSV with only one key in the text file (my test file), but when I try to run it on the full list, it only exports the last key and sec codes.
$keysheet = '.\AllKeys.txt'
$holdarr = @{}
Get-Content $keysheet | ForEach-Object {
    if ($_ -match "KEY") {
        $key, $value = $_.TrimStart("; ") -split " "
        $holdarr[$key] = $value }
    elseif ($_ -match "SEC") {
        $key, $value = $_ -split " = "
        $holdarr[$key] = $value }
}
$hash = New-Object PSObject -Property $holdarr
$hash | Export-Csv -Path '.\allsec.csv' -NoTypeInformation
When I run it on the full list, it also adds a couple of extra columns with what looks like properties instead of values.
Any help to get this to work would be appreciated.
Thanks.
Here's the approach I suggest:
$output = switch -Regex -File './AllKeys.txt' {
    '^; KEY (?<key>\d+)' {
        if ($o) {
            [pscustomobject]$o
        }
        $o = @{
            KEY = $Matches['key']
        }
    }
    '^(?<sec>SEC.*?)\s' {
        $o[$Matches['sec']] = ($_ | ConvertFrom-StringData)[$Matches['sec']]
    }
    default {
        Write-Warning -Message "No match found: $_"
    }
}
# catch the last object
$output += [pscustomobject]$o
$output | Export-Csv -Path './some.csv' -NoTypeInformation
This would be one approach.
& {
    $entry = $null
    switch -Regex -File '.\AllKeys.txt' {
        "KEY" {
            if ($entry) {
                [PSCustomObject]$entry
            }
            $entry = @{}
            $key, $value = $_.TrimStart("; ") -split " "
            $entry[$key] = [int]$value
        }
        "SEC" {
            $key, $value = $_ -split " = "
            $entry[$key] = $value
        }
    }
    [PSCustomObject]$entry
} | sort KEY | select KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6 |
    Export-Csv -Path '.\allsec.csv' -NoTypeInformation
Let's leverage the strength of ConvertFrom-StringData, which
Converts a string containing one or more key and value pairs to a hash table.
So what we will do is:
Split the file into blocks of text
Edit the "; KEY" line
Remove any blank lines or semicolon comment lines
Pass the block to ConvertFrom-StringData to create a hashtable
Convert that to a PowerShell object
$path = "c:\temp\keys.txt"
# Split the file into its key/sec collections. Drop any black entries created in the split
(Get-Content -Raw $path) -split ";\s+KEY\s+" | Where-Object{-not [string]::IsNullOrWhiteSpace($_)} | ForEach-Object{
# Split the block into lines again
$lines = $_ -split "`r`n" | Where-Object{$_ -notmatch "^;" -and -not [string]::IsNullOrWhiteSpace($_)}
# Edit the first line so we have a full block of key=value pairs.
$lines[0] = "key=$($lines[0])"
# Use ConvertFrom-StringData to do the leg work after we join the lines back as a single string.
[pscustomobject](($lines -join "`r`n") | ConvertFrom-StringData)
} |
# Cannot guarentee column order so we force it with this select statement.
Select-Object KEY,SEC1,SEC2,SEC3,SEC4,SEC5,SEC6
Use Export-Csv to your heart's content now.
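For example, a minimal sketch (output path assumed), assuming the pipeline above was captured in a variable named $keys:
# $keys is a hypothetical variable holding the output of the pipeline above
$keys | Export-Csv -Path 'c:\temp\keys.csv' -NoTypeInformation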
I have an Excel File which has an unknown number of records in it, and these 3 columns:
Variable Name, Store Number, Email Address
I use this in QlikView to import data for certain stores and then create a separate report for each store in the list. I then need to email each report to each individual store (store number will be in the report file name).
So in PowerShell I would like to read the Excel File and set variables for each store:
$Store1 = The Store Number in Row 2 of the Excel File
$Store1Email = The Store Email in Row 2 of the Excel File
$Store2 = The Store Number in Row 3 of the Excel File
$Store2Email = The Store Email in Row 3 of the Excel File
etc. for each Store in the file (there can be any number of stores).
Please note that the "Variable Name" column in the Excel file must be ignored (that is for QlikView), and the PowerShell variables must be named as per my examples above, each time incrementing the number.
Check out my PowerShell Excel Module on Github. You can also grab it from the PowerShell Gallery.
$stores = Import-Excel C:\Temp\store.xlsx
$stores[2].Name
$stores[2].StoreNumber
$stores[2].EmailAddress
''
'All stores'
'----------'
$stores
OK, first off, if you are going to be working with actual .XLS, .XLSX, or .XLSM files, I would highly suggest using the Import-XLS function from the TechNet gallery (found here).
After that, just reference the object it imports to send the emails instead of making objects for each store. Such as:
$StoreList = Import-XLS <path to Excel file>
GC <report folder> | %{
$Current = $_
$Store = $StoreList|?{$_.StoreNumber -match $Current.BaseName}|Select -ExpandProperty StoreNumber
$Email = $StoreList|?{$_.StoreNumber -match $Current.BaseName}|Select -ExpandProperty StoreEmail
<code to send $Current to $Email>
}
My preference is to Save-As the Excel file to the '.csv' type. The comma-separated values can then easily be imported into PowerShell.
$csvFile = Import-Csv -Path c:\scripts\temp\excelFile.csv
#now the entire Excel '.csv' file is saved into csvFile variable
$csvFile |Get-Member
#look at the properties
Remember to study the greats so your PowerShell script looks great: Jeffrey Snover, Jason Hicks, Don Jones, Ashley McGlone, and anyone on their friends list, ha ha.
The above answers usually work, but I just had a project with Excel datasheets that caused some problems.
Edit: here's a much more advanced version that will pull the data into an object, can handle blank and duplicate column names, and can skip human-oriented information at the beginning of the worksheet by looking for something in the header row. I've also included some example usages.
Your example:
$file = New-Module -AsCustomObject -ScriptBlock $file_template
$file.from_excel("c:\folder\file.xls")
$Store1 = $file.data[0]."Store Number" #first row, column named "Store Number"
$Store1Email = $file.data[0]."Store Email" #first row, column named "Store Email"
foreach ($row in $file.data)
{
write-host "Store: $($row."Store Number")"
write-host "Store Email: $($row."Store Email")"
}
Example 1:
# Simplest example
$file = New-Module -AsCustomObject -ScriptBlock $file_template
$file.from_excel("c:\folder\file.xls")
$file.data[0]
Example 2:
#advanced usage
$file = New-Module -AsCustomObject -ScriptBlock $file_template
$file.header_contains="First Name" # if included it will drop everything before the first line that contains this, useful if there are instructions for humans in the worksheet
$file.indexer_column = 5 # Default: 1 (first column); This column's contents will set the minimum number of rows, use if there are blank rows in your file but more data after them
$file.worksheet_index = "January" # Default: 1; can be a sheet index or sheet name
$file.filename = "c:\folder\file.xls" #can set this independently, useful for validation and troubleshooting
$file.from_excel() #This is where we actually pull from excel
$collected = $file.data | ogv -PassThru # this is a neat way to select some rows you want
$file.headers.count # It stores an array of the headers here, useful for troubleshooting and advanced logic
Excel Reader pseudoclass
$file_template = {
# -- universal --
$filename = ""
$delimiter = ","
$headers = @()
$data = @()
# -- used by some functions --
# we put these here to allow assigning them before calling functions, which improves readability and auditability
$header_contains=""
$indexer_column=1
$worksheet_index=1
function from_excel(
$filename=$this.filename,
$worksheet_index=$this.worksheet_index
)`
{
$this.filename = $filename
$this.worksheet_index = $worksheet_index
$data_by_row = $this.from_excel_as_csv() # $data_by_row = $file.from_excel_as_csv($test_file)
$data_by_row = $data_by_row -split"`n"
#if ($this.headers.count -lt 1) {$this.headers = $data_by_row[0] -split $this.delimiter} #this would let us set headers elsewhere which is more flexible but less adaptive, Because columns change unpredicably we need something more adaptive
$temp_headers = $data_by_row[0] -split $this.delimiter
$temp_headers = $this.fix_blank_headers($temp_headers)
$this.headers = $this.dedupe_headers($temp_headers)
$this.data = $data_by_row|select -Skip 1|ConvertFrom-Csv -Header $this.headers -Delimiter $this.delimiter
}
function from_csv($filename=$this.filename)`
{
$this.filename = $filename
$this.headers = (Get-Content $this.filename -ReadCount 1|select -first 1) -split $this.delimiter
$this.data = Get-Content $this.filename|ConvertFrom-Csv -Delimiter $this.delimiter
}
function from_excel_as_csv(
$filename=$this.filename,
$worksheet_index=$this.worksheet_index
)`
{
$this.filename = $filename
$this.worksheet_index = $worksheet_index
#set up excel
Write-Host "Importing from excel, this may take a little while..."
$excel = New-Object -ComObject Excel.Application
$excel.DisplayAlerts = $false
$excel.Visible = $false
$workbook = $excel.workbooks.open($this.filename)
$worksheet = $workbook.Worksheets.Item($this.worksheet_index)
#import from excel
try{
$data_by_row = ""
$indexed_column = $worksheet.columns.item($this.indexer_column).value2 #we use this to work around some files having headers with blank space
$minimum_rows = (($indexed_column -join "◘").TrimEnd("◘") -split "◘").count # This Strips the million or so extra blank rows excel appends to get a realistic column length.
[bool]$header_found = 0
$i=1
do `
{
$row = $worksheet.rows.item($i).value2
$row_as_text = $row -join "◘" # ◘ (alt+8) is just a placeholder that's unlikely to show up in the text
$row_as_text = $row_as_text -replace $this.delimiter,"."
$row_as_text = $row_as_text.TrimEnd("◘")
$row_as_text = $row_as_text -replace "◘",$this.delimiter
if ($row_as_text -like "*$($this.header_contains)*"){[bool]$header_found=1}
if ($header_found) {$data_by_row+="$row_as_text`n"}
$i++
}
while ( ($row_as_text.Length -gt 1) -or ($i -lt $minimum_rows) )
}
catch {Write-Warning "ERROR Importing from excel"}
#close excel
$workbook.Close()
$excel.Quit()
write-host "Done importing from excel"
return $data_by_row
}
function dedupe_headers($headers){
$dupes = ($headers|group)|?{$_.count -gt 1}
if ($dupes.count -ge 1)
{
foreach ($dupe in $dupes)
{ #$dupe = $dupes[0]
$i=1
$new_headers = @()
foreach ($header in $headers)
{ #$header = $headers[0]
if ($header -eq $dupe.name)
{
$header = "$($header)_$($i)" # "header_#"
$i++
}
$new_headers += $header
}
}
}
else {$new_headers = $headers} # no duplicates found
return $new_headers
}
function fix_blank_headers($headers)
{
$replace_blanks_with = "_"
$new_headers = @()
foreach ($header in $headers)
{
if ($header -eq "") {$new_headers += $replace_blanks_with}
else {$new_headers += $header}
}
if ($new_headers.count -ne $headers.count)
{
$error_json = @($headers),@($new_headers)|ConvertTo-Json -Compress
Write-Error "Error when fixing blank headers, original and new counts are different $($error_json)"
}
return $new_headers
}
<# function some_function($some_parameter){return $some_parameter} #>
Export-ModuleMember -Function * -Variable *
}
Forgive the ugliness here. I am not a programmer, so there are undoubtedly more optimized ways to do this, as well as better formatting. It will work, however, if I understand your requirements correctly.
$excelfile = Import-Csv "c:\myfile.csv"
$i = 1
$excelfile | ForEach-Object {
    New-Variable "Store$i" $_."Store Number"
    $iemail = $i.ToString() + "Email"
    New-Variable "Store$iemail" $_."Email Address"
    $i++
}
Edit: as per the reply to your original post, this works with a CSV file. Just save it to CSV first if necessary.
$excelfile = import-csv "C:\Temp\store.csv"
$i = 1 $excelfile | ForEach-Object {
$NA= $_."Name"
$SN= $_."StoreNumber"
Write-Output "row $i"
$NA
$SN
$i++ }