I've got a script that works, but since we have coworkers who have ö, ü, ä in their names, the CSV turns those characters into ? (example: Hörnlima = H?rnlima). Because of this it doesn't give me back any SamAccountName and the list isn't correct anymore. How can I correct that?
Script:
Import-Csv D:\Files\PowerShell\Test\4ME\DisplaynameToSamAccountName\Displaynames.txt | ForEach {
Get-ADUser -Filter "DisplayName -eq '$($_.DisplayName)'" -Properties Name, SamAccountName |
Select Name,SamAccountName
} | Export-CSV -path D:\Files\PowerShell\Test\4ME\DisplaynameToSamAccountName\Accountnames.csv -NoTypeInformation
Any ideas appreciated.
tl;dr:
Use, e.g., Export-Csv -Encoding utf8 ... to save your file with UTF-8 character encoding, which ensures that accented characters such as ö are preserved.
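Applied to the pipeline from the question (same paths), the only change needed on the output side is the -Encoding argument; a sketch:
Import-Csv D:\Files\PowerShell\Test\4ME\DisplaynameToSamAccountName\Displaynames.txt | ForEach-Object {
    Get-ADUser -Filter "DisplayName -eq '$($_.DisplayName)'" -Properties Name, SamAccountName |
        Select-Object Name, SamAccountName
} | Export-Csv -Path D:\Files\PowerShell\Test\4ME\DisplaynameToSamAccountName\Accountnames.csv -NoTypeInformation -Encoding utf8
# Note: if Displaynames.txt itself contains non-ASCII names and was saved without a BOM,
# you may also need to pass -Encoding to Import-Csv so the names are read correctly.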
In Windows PowerShell, Export-Csv regrettably defaults to ASCII encoding, which means that any characters outside the US-ASCII range - notably accented characters such as ö - are transliterated to literal ?.
That is, such characters are lost, because they cannot be represented in ASCII encoding.
In PowerShell [Core] v6+, all cmdlets, including Export-Csv, now thankfully default to BOM-less UTF-8 encoding.
As for the behavior when you append to a preexisting CSV file with the -Append switch without specifying -Encoding, see this answer.
Therefore, especially in Windows PowerShell, use the -Encoding parameter to specify the desired character encoding:
-Encoding utf8 is advisable, because it is capable of encoding all Unicode characters.
In Windows PowerShell, the resulting file will invariably have a BOM.
In PowerShell [Core] v6+, it will be BOM-less, which is generally better for cross-platform compatibility, but you can alternatively use -Encoding utf8BOM to use a BOM.
-Encoding Unicode (UTF-16LE) is another option, but it results in larger files (most characters are encoded with 2 bytes). This encoding always results in a BOM.
-Encoding Default (Windows PowerShell) or
-Encoding (Get-Culture).TextInfo.ANSICodePage (PowerShell [Core] v6+) on Windows uses your system's active ANSI code page to create a BOM-less file.
This legacy encoding is best avoided, however, for multiple reasons:
Many modern applications assume UTF-8 encoding in the absence of a BOM.
Even those that read the file as ANSI-encoded may interpret it differently if the host system happens to have a different active ANSI code page.
Since the active ANSI code page is (for Western cultures) a fixed, single-byte encoding, only 256 characters can be represented, which is only a small subset of all Unicode characters.
Note that when PowerShell reads a file that is BOM-less, including source code, the behavior differs between the two editions:
In Windows PowerShell, Default is assumed, i.e. the system's active ANSI code page.
Note that in recent versions of Windows 10 it is now possible to make UTF-8 the ANSI code page, but such a system-wide change can have unintended consequences - see this answer.
In PowerShell [Core] v6+, UTF-8 is assumed.
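If you want to verify which variant you ended up with, you can inspect the file's first bytes for the UTF-8 BOM (EF BB BF); a quick sketch (the path is just an example, taken from the question):
$bytes = [System.IO.File]::ReadAllBytes('D:\Files\PowerShell\Test\4ME\DisplaynameToSamAccountName\Accountnames.csv')
# A UTF-8 BOM is the three-byte sequence EF BB BF at the very start of the file.
if ($bytes.Length -ge 3 -and $bytes[0] -eq 0xEF -and $bytes[1] -eq 0xBB -and $bytes[2] -eq 0xBF) {
    'UTF-8 with BOM'
} else {
    'BOM-less (UTF-8 without BOM, ANSI, or another encoding)'
}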
Related
Just can't get ASCII encoding to work in PowerShell. I've tried a bunch of different approaches.
Whatever I try, I get a UTF-8 encoded file (that is what Notepad++ tells me):
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding ascii
PSVersion = 5.1.14393.5066
Any hint is welcome!
ASCII is a 7-bit character set and doesn't contain any accented characters, so obviously storing öäü in ASCII doesn't work. If you need UTF-8, then you need to specify the encoding as utf8:
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding utf8
If you need another encoding, specify it accordingly. For example, to use the current ANSI code page, use this:
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding default
-Encoding default will save the file in the current ANSI code page and -Encoding oem will use the current OEM code page. Just press Tab after -Encoding and PowerShell will cycle through the list of supported encodings. For encodings not in that list, you can deal with them directly via System.Text.Encoding.
Note that "ANSI code page" is a misnomer and the actual encoding changes depending on each environment so it won't be reliable. For example if you change the code page manually then it won't work anymore. For a more reliable behavior you need to explicitly specify the encoding (typically Windows-1252 for Western European languages). In older PowerShell use
[IO.File]::WriteAllLines("c:\temp\check.txt", $newLine, [Text.Encoding]::GetEncoding(1252)
and in PowerShell Core you can use
$newLine | Out-File -FilePath "check2.txt" -Encoding ([Text.Encoding]::GetEncoding(1252))
See How do I change my Powershell script so that it writes out-file in ANSI - Windows-1252 encoding?
Found the solution:
$file = "c:\temp\check-ansi.txt"
$newLine = "Ein Test öÖäÄüÜ"
Remove-Item $file
[IO.File]::WriteAllLines($file, $newLine, [Text.Encoding]::GetEncoding(28591))
I used the answer to this question:
Using PowerShell to write a file in UTF-8 without the BOM
to encode a file (UCS-2) to UTF-8. The problem is that if I run the encoding twice (or more times), the Cyrillic text is broken. How can I stop the encoding if the file is already in UTF-8?
The code is:
$MyFile = Get-Content $MyPath
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
[System.IO.File]::WriteAllLines($MyPath, $MyFile, $Utf8NoBomEncoding)
Use:
$MyFile = Get-Content -Encoding UTF8 $MyPath
Initially, when $MyPath is UTF-16LE-encoded ("Unicode" encoding, which I assume is what you meant), PowerShell will ignore the -Encoding parameter due to the presence of a BOM in the file, which unambiguously identifies the encoding.
If your original file does not have a BOM, more work is needed.
Once you've saved $MyPath as UTF-8 without BOM, you must tell Windows PowerShell[1] that you expect UTF-8 encoding with -Encoding UTF8, as it interprets files as "ANSI"-encoded by default (encoded according to the typically single-byte code page associated with the legacy system locale).
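Putting the answer together, a version of the conversion that can safely be re-run looks like this (same variables as in the question):
# Read the file as UTF-8; a BOM, if present, takes precedence anyway.
$MyFile = Get-Content -Encoding UTF8 $MyPath
# Write it back as UTF-8 without a BOM.
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
[System.IO.File]::WriteAllLines($MyPath, $MyFile, $Utf8NoBomEncoding)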
[1] Note that the cross-platform PowerShell Core edition defaults to BOM-less UTF-8.
In PowerShell, what's the difference between Out-File and Set-Content? Or Add-Content and Out-File -append?
I've found if I use both against the same file, the text is fully mojibaked.
(A minor second question: > is an alias for Out-File, right?)
Here's a summary of what I've deduced, after a few months experience with PowerShell, and some scientific experimentation. I never found any of this in the documentation :(
[Update: Much of this now appears to be better documented.]
Read and write locking
While Out-File is running, another application can read the log file.
While Set-Content is running, other applications cannot read the log file. Thus never use Set-Content to log long-running commands.
Encoding
Out-File saves in the Unicode (UTF-16LE) encoding by default (though this can be specified), whereas Set-Content defaults to ASCII (US-ASCII) in PowerShell 3+ (this may also be specified). In earlier PowerShell versions, Set-Content wrote content in the Default (ANSI) encoding.
Editor's note: PowerShell as of version 5.1 still defaults to the culture-specific Default ("ANSI") encoding, despite what the documentation claims. If ASCII were the default, non-ASCII characters such as ü would be converted to literal ?, but that is not the case: 'ü' | Set-Content tmp.txt; (Get-Content tmp.txt) -eq '?' yields $False.
PS > $null | out-file outed.txt
PS > $null | set-content set.txt
PS > md5sum *
f3b25701fe362ec84616a93a45ce9998 *outed.txt
d41d8cd98f00b204e9800998ecf8427e *set.txt
This means the defaults of the two commands are incompatible, and mixing them will corrupt text, so always specify an encoding.
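For instance, if you really must write to the same log file with different cmdlets, pinning all of them to the same encoding avoids the corruption (a sketch; the file name is made up and utf8 is just one reasonable choice):
dir | Out-File log.txt -Encoding utf8
'one more line' | Add-Content log.txt -Encoding utf8
'and another'   | Out-File log.txt -Append -Encoding utf8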
Formatting
As Bartek explained, Out-File saves the fancy formatting of the output, as seen in the terminal. So in a folder with two files, the command dir | out-file out.txt creates a file with 11 lines.
Whereas Set-Content saves a simpler representation. In that folder with two files, the command dir | set-content sc.txt creates a file with two lines. To emulate the output in the terminal:
PS > dir | ForEach-Object {$_.ToString()}
out.txt
sc.txt
I believe this formatting has a consequence for line breaks, but I can't describe it yet.
File creation
Set-Content doesn't reliably create an empty file when Out-File would:
In an empty folder, the command dir | out-file out.txt creates a file, while dir | set-content sc.txt does not.
Pipeline Variable
Set-Content takes the filename from the pipeline, allowing you to set a number of files' contents to some fixed value.
Out-File takes the data from the pipeline, updating a single file's content.
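A sketch of that difference (file names made up): piping file objects to Set-Content binds their paths, whereas Out-File treats whatever is piped as the content of the single file named in -FilePath:
# Set-Content: each piped file's path is bound (by property name), and its content
# is replaced with the fixed -Value.
Get-ChildItem *.log | Set-Content -Value 'reset'
# Out-File: the piped objects become the content of the one target file.
Get-ChildItem *.log | Out-File -FilePath loglist.txt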
Parameters
Set-Content includes the following additional parameters:
Exclude
Filter
Include
PassThru
Stream
UseTransaction
Out-File includes the following additional parameters:
Append
NoClobber
Width
For more information about what those parameters are, please refer to help; e.g. get-help out-file -parameter append.
Out-File overwrites the output path unless the -NoClobber and/or the -Append flag is set. Add-Content will append content by default if the output path already exists (if it can). Both will create the file if it doesn't already exist.
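To illustrate (file name made up):
'one'   | Out-File t.txt              # creates (or overwrites) t.txt
'two'   | Out-File t.txt -NoClobber   # error: t.txt already exists
'three' | Add-Content t.txt           # appends (creating the file first if needed)
'four'  | Out-File t.txt -Append      # also appends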
Another interesting difference is that Add-Content will create an ASCII-encoded file by default, while Out-File will create a little-endian Unicode (UTF-16LE) encoded file by default.
> is essentially syntactic sugar for Out-File: it's Out-File with some pre-defined parameter settings.
Well, I would disagree... :)
Out-File has -Append (and -NoClobber to avoid overwriting), which will add content much like Add-Content does. But it's not the same beast.
command | Add-Content will use the .ToString() method on the input. Out-File will use the default formatting.
so:
ls | Add-Content test.txt
and
ls | Out-File test.txt
will give you totally different results.
And no, '>' is not an alias, it's a redirection operator (same as in other shells), and it has a very serious limitation... it will truncate lines the same way they are displayed. Out-File has a -Width parameter that helps you avoid this. Also, with redirection operators you can't decide which encoding to use.
HTH
Bartek
Set-Content supports -Encoding Byte, while Out-File does not.
So when you want to write binary data, or the result of Text.Encoding#GetBytes(), to a file, you should use Set-Content.
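For example, in Windows PowerShell (file name made up; note that PowerShell [Core] v6+ removed -Encoding Byte in favor of the -AsByteStream switch):
$bytes = [Text.Encoding]::UTF8.GetBytes('Ein Test öäü')
Set-Content -Path bytes.bin -Value $bytes -Encoding Byte
# PowerShell [Core] v6+ equivalent:
# Set-Content -Path bytes.bin -Value $bytes -AsByteStream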
I wanted to add a note about the difference in encoding:
Windows with PowerShell 5.1:
Out-File - Default encoding is utf-16le
Set-Content - Default encoding is us-ascii
Linux with PowerShell 7.1:
Out-File - Default encoding is us-ascii
Set-Content - Default encoding is us-ascii
Editor's note: the documented default for both cmdlets in PowerShell 6+ is utf8NoBOM; a BOM-less UTF-8 file that contains only ASCII characters is byte-for-byte identical to a US-ASCII file, which is presumably why it is detected as us-ascii here.
Out-File -Append or >> can actually mix two encodings in the same file. Even if the file was originally ASCII or ANSI, it will add Unicode by default at the bottom of it. Add-Content will check the encoding and match it before appending. By the way, Export-Csv defaults to ASCII (no accents), and Set-Content/Add-Content default to ANSI.
TL;DR: use Set-Content, as it's more consistent than Out-File.
Set-Content behavior is the same across different PowerShell versions.
Out-File, as #JagWireZ says, produces different encodings for the default settings, even on the same OS (Windows); the docs for PowerShell 5.1 and PowerShell 7.3 state that the default encoding changed from unicode to utf8NoBOM.
Some issues, like malformed XML, arise from using Out-File. These could of course be fixed by setting the desired encoding, but it's easy to forget to set it and end up with issues.
I have a file called test.txt which contains a single Chinese character, 中, in it.
Under a hex editor, the file shows the bytes e4 b8 ad 0a.
If I do get-content test.txt | Out-File test_output.txt, the content of test_output.txt is different from test.txt. Why is this happening?
I've tried all the encoding parameters listed here ("Unicode", "UTF7", "UTF8", "UTF32", "ASCII", "BigEndianUnicode", "Default", and "OEM"), but none of them correctly converts the Chinese character.
How can I correctly convert Chinese characters using Get-Content and Out-File?
The bytes e4 b8 ad look like the URL-encoded form of 中 (%E4%B8%AD); is this why all the encoding parameters are not compatible with this Chinese character?
I use Notepad++ and Notepad++'s hex-editor plugin as my text-editor and hex-editor, respectively.
I tried get-content test.txt -encoding UTF8 | Out-File test_output.txt -encoding UTF8
My test.txt is "e4 b8 ad 0a". And the output is "ef bb bf e4 b8 ad 0d 0a"
test.txt is in UTF-8.
Get-Content doesn't recognize UTF-8 unless the file has a BOM, and Out-File uses UTF-16 by default.
So specifying the encoding for both commands is necessary.
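If the BOM (and the CRLF newline) that Out-File -Encoding UTF8 adds in Windows PowerShell are unwanted, you can fall back to the .NET API shown in the earlier answers; a sketch using the file names from the question:
# -Raw reads the whole file as one string, preserving the original LF line ending.
$text = Get-Content -Raw -Encoding UTF8 test.txt
# Use an absolute path: .NET's current directory may differ from PowerShell's.
[IO.File]::WriteAllText("$PWD\test_output.txt", $text, (New-Object System.Text.UTF8Encoding $False))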
In my case, the Unicode encoding solved my problem with the Chinese characters. The file I was modifying contained C# code on a TFS server.
$path="test.cs"
Get-Content -Path $path -Encoding Unicode
Set-Content -Path $path -Encoding Unicode
it might help somebody else.