Export multiple lines in CSV (not multi-valued properties into CSV) - PowerShell

First of all I would like to clarify that my question is NOT: how to export multi-valued properties (arrays) into a CSV file.
My use case is the following: I am writing a script to audit a Hyper-V infrastructure and I would like to have multiple lines in one CSV cell. For example: NIC1, NIC2 ... Disk 1, Disk 2.
I do not want to use the -join operator and end up with everything on a single line.
I am almost certain that there was an article about this and that I did manage to achieve the goal, but unfortunately I can find neither the article nor the script in which I used it.
Any suggestions or ideas would be highly appreciated! :)

The hint was submitted by LeroyJD in the comments.
@LeroyJD, thank you very much, it helped a lot! :)
To summarize for future reference: yes, it is possible by using the newline escape `n to present the value on multiple lines. Sample code:
[PSCustomObject]@{
    Value1 = 'Hello'
    Value2 = "Hello `nWorld"
} | Export-Csv C:\TEMP\multiline.csv -NoTypeInformation
Invoke-Item C:\TEMP\multiline.csv
Which results in:
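The raw CSV should look roughly like this (the embedded newline stays inside the quoted field, and Excel renders it as two lines in a single cell):

"Value1","Value2"
"Hello","Hello 
World"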
Thanks again to LeroyJD for the hint!

Related

PowerShell: Compare filenames to a list of formats

I am not looking for a working solution but rather an idea to push me in the right direction, as I was thrown a curveball I am not sure how to tackle.
Recently I wrote a script that used the split operator to check the first part of the file name against the folder name. This was successful, so now there is a new ask: check all the files against the naming matrix. The problem is that there are 50+ file formats on the list.
So, for example, the format of a document would be ID-someID-otherID-date.xls,
with 12345678-xxxx-1234abc.xls as a format giving the number of characters, to check whether files have the correct number of characters in each section and spot typos etc.
Is there any other reasonable way of tackling that besides regex? I was thinking of using multiple splits on the hyphens, but I don't really have anything to reference that against other than the number of characters required in each part.
As always, any (even vague) pointers are most welcome ;-)
Although I would use a regex (as also commented by zett42), there is indeed another way, which is to use the ConvertFrom-String cmdlet with a template:
$template = @'
{[Int]Some*:12345678}-{[String]Other:xxxx}-{[DateTime]Date:2022-11-18}.xls
{[Int]Some*:87654321}-{[String]Other:yyyy}-{[DateTime]Date:18Nov2022}.xls
'@
'23565679-iRon-7Oct1963.xls' | ConvertFrom-String -TemplateContent $template
Some : 23565679
Other : iRon
Date : 10/7/1963 12:00:00 AM
RunspaceId : 3bf191e9-8077-4577-8372-e77da6d5f38d
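For comparison, a plain regex check along the lines zett42 suggested could look like the snippet below; the pattern is only an illustration derived from the sample format (8-digit ID, 4-character middle section, date-like token), not the real naming matrix:

$pattern = '^\d{8}-\w{4}-[\w-]+\.xls$'
'23565679-iRon-7Oct1963.xls' -match $pattern   # True
'2356-iRon-7Oct1963.xls' -match $pattern       # False - first section is too short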

How to trim the branch name "refs/heads/feature/branch name" in PowerShell?

In PowerShell, the pipeline variable $(Build.SourceBranch) resolves to "refs/heads/feature/branchname".
But I need to trim the value and get the output as just "feature/branchname", i.e. remove "refs/heads/".
Note 1: the replace() function will solve the issue, but I am looking for other feasible solutions.
Note 2: I tried TrimStart() in PowerShell, but it gives the result "ture/branchname" instead of "feature/branchname".
Can someone please suggest the best way of handling this?
Thanks in advance!
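For reference, TrimStart() behaves this way because it treats its argument as a set of characters to remove, so the leading "f" and "e" of "feature" are stripped along with the prefix. A couple of illustrative alternatives (a sketch, not an exhaustive list):

$branch = 'refs/heads/feature/branchname'
$branch -replace '^refs/heads/'          # regex operator anchored at the start: feature/branchname
($branch -split '/', 3)[2]               # split into at most 3 parts, keep the last: feature/branchname
$branch.Substring('refs/heads/'.Length)  # assumes the prefix is always present: feature/branchname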

Using PowerShell to extract data from a large tab-separated text file, mask it and then merge the masked data back into the original file

I am a newbie to Windows PowerShell and wanted to know if it is possible to use PowerShell to extract specific data from a tab-delimited (.dat) file and merge it back into the original file.
The reason for the extraction is that the data is sensitive and requires masking.
After extraction, I would need to mask the data and then merge the masked data back into the original file in its original places.
Please provide some pointers, any kind of help would be appreciated.
Thank you in advance.
Solution
Here's a solution based on my limited understanding of your question (if you add more details I may be able to be more specific).
Code
It seems all you need to do is read all the data, modify it and write it back to the file, so here it is:
$Columns = 2,4 # columns to mask out (indexes start from 0)
(Get-Content ./lol.dat) | ForEach-Object {   # parentheses read the whole file before it is rewritten below
    $arr = $_.Split("`t")
    $Columns | ForEach-Object { $arr[$_] = '*' * $arr[$_].Length }
    $arr -join "`t"                          # rejoin the masked fields with tabs
} | Out-File ./lol.dat
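If the .dat file has a header row, an Import-Csv based variant may be easier to maintain; the column names below (SSN, Email) are placeholders for whatever the sensitive fields are actually called:

$rows = Import-Csv ./lol.dat -Delimiter "`t"
foreach ($row in $rows) {
    $row.SSN   = '*' * $row.SSN.Length      # replace each sensitive value with same-length asterisks
    $row.Email = '*' * $row.Email.Length
}
$rows | Export-Csv ./lol.dat -Delimiter "`t" -NoTypeInformation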

How to filter by column name?

I have a CSV file. I'd like to filter it and keep only the columns whose headers begin with 'hit'. How can I do that?
Small example input:
hit1,miss1,hit2,miss2
a,0,d,0
b,0,e,0
c,0,f,0
Desired output:
hit1,hit2
a,d
b,e
c,f
I think I want the exclude command, but I can't figure out the syntax.
The order command will let you specify an inclusive list of column names to be included in the output:
csvfix order -fn hit1,hit2 data.csv
(I realize I'm late to the party, but maybe this will helpful to the next person.)
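Since the rest of this page is PowerShell-centric, it is worth noting the same filtering can also be done natively: Select-Object accepts wildcards in property names, so every column whose header starts with 'hit' can be kept without listing each one (a sketch, assuming data.csv is the input and filtered.csv the output):

Import-Csv data.csv |
    Select-Object hit* |
    Export-Csv filtered.csv -NoTypeInformation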

How to convert string to hashtable in 1 go?

This is strictly a learning experience:
I have a .CSV file that I'm using to define my deployment environments. One of the variables has to be in a Hash Table format.
Can anyone come up with a clever way to put it all in one line?
Right now I harvest them as a string from the CSV, convert the string to an array, and convert the array to a hashtable.
Simplified code:
Foreach($i in $DefaultCSV){...
$App_Fabric_Hosts_a = $i.App_Fabric_Hosts.split(",")}
$App_Fabric_Hosts_h = @{}
foreach($r in $App_Fabric_Hosts_a){$App_Fabric_Hosts_h.add($r,"22233")}
This is the best I came up with:
$d=@{};foreach($r in $DefaultCSV[$arrayposition].app_fabric_hosts.split(",")){$d.add($r,"22233")}
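For the record, one way to collapse this into a single expression is ConvertFrom-StringData, which builds a hashtable from key=value lines; this assumes the host names contain no '=' or backslash characters, which that cmdlet treats specially:

$App_Fabric_Hosts_h = ConvertFrom-StringData -StringData (($DefaultCSV[$arrayposition].App_Fabric_Hosts -split ',' | ForEach-Object { "$_=22233" }) -join "`n")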