My task is to merge two XML files. Before merging them, I need to remove the first line of the second file. I was able to produce the required output file with these two lines:
# case: if both files exist, remove the first line from the second file
(Get-Content $JfilePath | Select-Object -Skip 1) | Set-Content $JfilePath
# merge the files together
Get-Content $MfilePath, $JfilePath | Set-Content $mergedFile
The issue is that executing the first command modifies the second file. I would like to keep both files in their original form, and I also don't want to create any temporary files.
I was trying to perform the following:
Get-Content $MfilePath, (Get-Content $JfilePath | Select-Object -Skip 1) | Set-Content $mergedFile
but I received the error:
Get-Content : Cannot convert 'System.Object[]' to the type 'System.String' required by parameter 'LiteralPath'. Specified method is not supported.
Could you please help me produce the output file without modifying the input files?
Your attempt fails because the parenthesized expression is passed to the outer Get-Content as a second path argument, and an array of lines cannot be converted to a path string (hence the 'Cannot convert System.Object[]' error). Instead, read each file in its own expression and let the comma operator join the results. Try this:
(Get-Content $MfilePath), (Get-Content $JfilePath | Select-Object -Skip 1) | Set-Content $mergedFile
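A minimal sketch of the full flow, with hypothetical example paths standing in for the question's variables:

# Hypothetical paths; substitute your own.
$MfilePath  = 'C:\data\main.xml'
$JfilePath  = 'C:\data\second.xml'
$mergedFile = 'C:\data\merged.xml'

# Each parenthesized expression reads a file into an array of lines in memory,
# so neither input file is modified on disk.
(Get-Content $MfilePath), (Get-Content $JfilePath | Select-Object -Skip 1) |
    Set-Content $mergedFile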
I'm familiar with exporting data to CSV with PowerShell, but a particular vendor wants to receive the data in a file of the specific type 'CSV'. If I use the following example command, the data looks fine in the raw CSV format (e.g. viewed in Notepad), where you can see it's comma-separated ...
,,JVF1033292,test,SL,10700,6804626,745.586,43001843,Test,8/12/2020,8/14/2020,T,5584,,,JPY,0,XTKS,,,,,,0
,,JVF1033293,test,SL,3695,6805179,1362.8457,43001843,Test,8/12/2020,8/14/2020,T,3524,,,JPY,0,XTKS,,,,,,0
... but when the same file is opened in Excel, all the data is in one column and therefore fails on the vendor's side. The code I'm using is below.
$Table | ConvertTo-Csv -NoTypeInformation | ForEach-Object { $_ -replace "`"", "" } | Select-Object -Skip 1 | Out-File -FilePath ("$dir_opr\LVT\Export\CTO\ForJefferies\GMO_Jefferies_Trade_Data_" + $Date + ".csv")
If I use the code below, it looks fine in Excel (split into columns correctly), but the raw file is tab-separated rather than comma-separated, which the vendor has issues with.
$Table | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation | ForEach-Object { $_ -replace "`"", "" } | Select-Object -Skip 1 | Out-File -FilePath ("$dir_opr\LVT\Export\CTO\ForJefferies\GMO_Jefferies_Trade_Data_" + $Date + ".csv")
X JVF1032244 Test BY 450.0000 BYM41J3 10.00000000 43001843 Test 08/11/2020 08/13/2020 T 3.00 JPY 0 XTKS 0.00
X JVF1032245 Test BY 200.0000 BYM41J3 250.00000000 43001843 Test 08/11/2020 08/13/2020
Is it possible to create a raw file that is comma-separated but also opens properly delimited in Excel, so it's not all in one column?
As far as I can tell from testing locally, it might be due to the default encoding that Out-File uses, i.e. utf8NoBOM. (See the description of the -Encoding parameter for details: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/out-file?view=powershell-7)
Try Out-File -FilePath $myfile -Encoding utf8; that seems to fix it on my machine.
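For example, the question's first command with the encoding overridden (a sketch; $Table, $dir_opr, and $Date are the variables from the question):

$Table |
    ConvertTo-Csv -NoTypeInformation |
    ForEach-Object { $_ -replace "`"", "" } |
    Select-Object -Skip 1 |
    Out-File -FilePath ("$dir_opr\LVT\Export\CTO\ForJefferies\GMO_Jefferies_Trade_Data_" + $Date + ".csv") -Encoding utf8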
I have two text files that contain many duplicate lines. I would like to run a PowerShell statement that will output a new file with only the values NOT already in the first file. Below is an example of the two files.
File1.txt
-----------
Alpha
Bravo
Charlie
File2.txt
-----------
Alpha
Echo
Foxtrot
In this case, only Echo and Foxtrot are not in the first file. So these would be the desired results.
OutputFile.txt
------------
Echo
Foxtrot
I reviewed the link below, which is similar to what I want, but it does not write the results to an output file.
Remove lines from file1 that exist in file2 in Powershell
Here's one way to do it:
# Get unique values from first file
$uniqueFile1 = (Get-Content -Path .\File1.txt) | Sort-Object -Unique
# Get lines in second file that aren't in first and save to a file
Get-Content -Path .\File2.txt | Where-Object { $uniqueFile1 -notcontains $_ } | Out-File .\OutputFile.txt
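Note that -notcontains compares case-insensitively by default; if case matters, here is a sketch using the case-sensitive variant:

# -cnotcontains is the case-sensitive form of -notcontains.
Get-Content -Path .\File2.txt |
    Where-Object { $uniqueFile1 -cnotcontains $_ } |
    Out-File .\OutputFile.txt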
The approach in the referenced link will work; however, for every line in the original file it triggers the second file to be read from disk. This could be painful depending on the size of your files. I think the following approach will meet your needs.
$file1 = Get-Content .\File1.txt
$file2 = Get-Content .\File2.txt
$compareParams = @{
    ReferenceObject  = $file1
    DifferenceObject = $file2
}
Compare-Object @compareParams |
    Where-Object -Property SideIndicator -eq '=>' |
    Select-Object -ExpandProperty InputObject |
    Out-File -FilePath .\OutputFile.txt
This code does the following:
Reads each file into a separate variable
Creates a hashtable for the parameters of Compare-Object (see about_Splatting for more information)
Compares the two files in memory, keeps only the lines unique to the second file (SideIndicator '=>'), and passes the results to Out-File
Writes the contents of the pipeline to "OutputFile.txt"
If you are comfortable with the overall flow of this, and are only using this in one-off situations, the whole thing can be compressed into a one-liner.
(Compare-Object (gc .\File1.txt) (gc .\File2.txt) | ? SideIndicator -eq '=>').InputObject | Out-File .\OutputFile.txt
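If the files are very large, here is a sketch of an alternative that swaps Compare-Object for a HashSet, so each lookup is constant-time:

# Build a set of File1's lines once, then stream File2 against it.
$seen = [System.Collections.Generic.HashSet[string]]::new([string[]](Get-Content .\File1.txt))
Get-Content .\File2.txt |
    Where-Object { -not $seen.Contains($_) } |
    Out-File .\OutputFile.txt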
I am faced with the task of converting one or more CSV files to JSON.
I have been able to do that with the following command:
Import-Csv "File1.csv" |
ConvertTo-Json -Compress |
Add-Content -Path "File1_output.JSON"
However, this only does one file at a time and you have to enter the name of the file manually.
To give you some background: say I have two CSV files, one called File1.csv and another called File2.csv. I would like to loop through all .csv files in the Documents folder and create a .json output file for each, with a suffix appended to the name.
Ideally I would like to call it as a .ps1 file, or even from a command-line .bat file, as SSIS will be calling it ultimately.
Use Get-ChildItem for enumerating files, and a loop for processing the enumerated files.
Something like this should work for converting all CSVs in the current working directory:
Get-ChildItem -Filter '*.csv' | ForEach-Object {
    Import-Csv $_.FullName |
        ConvertTo-Json -Compress |
        Set-Content ($_.BaseName + '_output.json')
}
Note that this will convert each file individually. If you want to combine all CSVs into a single JSON document, you could probably eliminate the loop
Get-ChildItem -Filter '*.csv' |
    Import-Csv |
    ConvertTo-Json -Compress |
    Set-Content 'output.json'
at least as long as the CSVs have the same columns.
Otherwise you need to define the desired structure of the resulting JSON file first, and then merge the data from the CSVs in a way that conforms to the desired output format.
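Since SSIS will ultimately call this, the loop could live in a small standalone script; here is a sketch, where the script name, parameter, and default folder are assumptions to adjust:

# Convert-CsvToJson.ps1 -- hypothetical wrapper script.
param(
    # Folder to scan for CSV files; the default is an assumption, adjust as needed.
    [string]$Path = (Join-Path $HOME 'Documents')
)

Get-ChildItem -Path $Path -Filter '*.csv' | ForEach-Object {
    Import-Csv $_.FullName |
        ConvertTo-Json -Compress |
        Set-Content (Join-Path $Path ($_.BaseName + '_output.json'))
}

A .bat file (or an SSIS Execute Process task) can then invoke it with powershell.exe -NoProfile -File Convert-CsvToJson.ps1.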
I need to merge CSV files and want to use the following command from the post Merging multiple CSV files into one using PowerShell:
Get-ChildItem -Filter *.csv |
Select-Object -ExpandProperty FullName |
Import-Csv |
Export-Csv .\merged\merged.csv -NoTypeInformation -Append
However, only the first column of the source CSV files ends up in merged.csv.
Found the issue: the tool generating the CSV files added the following top line:
sep=,
With this line present, Excel opens the CSV already correctly formatted, but Import-Csv misinterprets it as the header row. After removing this line from the CSV files, everything works as expected with the Import-Csv and Export-Csv commands.
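If you'd rather not edit the source files, here is a sketch that skips the sep= line in memory instead (assuming every file starts with that line):

# Skip the "sep=," top line of each file on the fly; the originals stay untouched.
Get-ChildItem -Filter *.csv |
    ForEach-Object {
        Get-Content $_.FullName |
            Select-Object -Skip 1 |
            ConvertFrom-Csv
    } |
    Export-Csv .\merged\merged.csv -NoTypeInformation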
I have got a set of txt files in a directory that I want to merge together.
The contents of all the txt files are in the same format as follows:
IPAddress Description DNSDomain
--------- ----------- ---------
{192.168.1.2} Microsoft Hyper-V Network Adapter
{192.168.1.30} Microsoft Hyper-V Network Adapter #2
I have the below code that combines all the txt files into one txt file called all.txt.
copy *.txt all.txt
In all.txt I can't see which lines came from which txt file. Any ideas for a bit of code that would add an extra column to the combined file with the name of the file each row came from?
As per the comments above, you've put the output of Format-Table into a text file. Note that while Format-Table output looks structured on screen, it is just lines of text; by saving it, you have made the data harder to work with.
If you just want a few properties from the results of the Get-WMIObject cmdlet, use Select-Object, which (as used here) effectively filters the data down to just the properties you want.
Instead of writing text to a simple file, you can preserve the tabular nature of the data by writing to a structured file (i.e. CSV):
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter IPEnabled=TRUE -ComputerName SERVERNAMEHERE |
Select-Object PSComputerName, IPAddress, Description, DNSDomain |
Export-Csv 'C:\temp\server.csv'
Note that we were able to include the PSComputerName property in each line of data, effectively giving you the extra column of data you wanted.
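As an aside, Get-WmiObject is not available in PowerShell 7+; here is a sketch of the same query using Get-CimInstance (SERVERNAMEHERE is the placeholder from above):

Get-CimInstance -ClassName Win32_NetworkAdapterConfiguration -Filter 'IPEnabled=TRUE' -ComputerName SERVERNAMEHERE |
    Select-Object PSComputerName, IPAddress, Description, DNSDomain |
    Export-Csv 'C:\temp\server.csv'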
So much for getting the raw data. One way you could read in all the CSV data and write it out again might look like this:
Get-ChildItem *.csv -Exclude all.csv |
    ForEach-Object { Import-Csv $_ } |
    Export-Csv all.csv
Note that we exclude the output file in the initial cmdlet, to avoid endlessly reading from and writing to the same file.
If you don't have the luxury of collecting the data again, you'll need to spool the existing files together. Spooling files together is done with Get-Content, something like this:
Get-ChildItem *.txt -Exclude all.txt |
    ForEach-Object { Get-Content $_ -Raw } |
    Out-File all.txt
In your case, you wanted to suffix each line with the file name, which is trickier as you need to process the files line by line:
# Exclude the output file so an existing all.txt isn't read back in.
$files = Get-ChildItem *.txt -Exclude all.txt
foreach ($file in $files) {
    $lines = Get-Content $file
    foreach ($line in $lines) {
        # Append the source file's name to each line.
        "$line $($file.Name)" | Out-File all.txt -Append
    }
}
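For larger inputs, here is a sketch that builds the suffixed lines in the pipeline and writes the output file once, rather than reopening it for every line:

Get-ChildItem *.txt -Exclude all.txt |
    ForEach-Object {
        $name = $_.Name
        # Emit each line with its source file's name appended.
        Get-Content $_ | ForEach-Object { "$_ $name" }
    } |
    Out-File all.txt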