I'm currently outputting a long string from PowerShell as Unicode, using the syntax below:
$string | out-file $path -encoding unicode
If I try to import this file into Mongo, or another process that expects UTF-8, I get an "Invalid UTF8 character detected" error. Is this the incorrect syntax?
"Unicode" in PowerShell means UTF-16LE, which is not the same encoding as UTF-8. Have you tried -Encoding ASCII or -Encoding UTF8?
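For instance, a minimal sketch of the UTF-8 variant (note that in Windows PowerShell, -Encoding utf8 writes a UTF-8 BOM; if the consuming process rejects the BOM, the BOM-less approaches shown later in this thread apply. PowerShell Core 6+ writes BOM-less UTF-8 by default):

$string | Out-File $path -Encoding utf8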
I just can't get ASCII encoding to work in PowerShell. I've tried a bunch of different approaches.
Whatever I try, I get a UTF-8-encoded file (that is what Notepad++ tells me):
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding ascii
PSVersion = 5.1.14393.5066
Any hint is welcome!
ASCII is a 7-bit character set and doesn't contain any accented characters, so storing öäü in ASCII obviously can't work. If you need UTF-8 then you need to specify the encoding as utf8:
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding utf8
If you need another encoding, specify it accordingly. For example, to use the current ANSI code page:
$newLine = "Ein Test öäü"
$newLine | Out-File -FilePath "c:\temp\check.txt" -Encoding default
-Encoding default will save the file in the current ANSI code page, and -Encoding oem will use the current OEM code page. Just press Tab after -Encoding and PowerShell will cycle through the list of supported encodings. For encodings not in that list, you can deal with them easily using System.Text.Encoding.
Note that "ANSI code page" is a misnomer and the actual encoding changes depending on each environment so it won't be reliable. For example if you change the code page manually then it won't work anymore. For a more reliable behavior you need to explicitly specify the encoding (typically Windows-1252 for Western European languages). In older PowerShell use
[IO.File]::WriteAllLines("c:\temp\check.txt", $newLine, [Text.Encoding]::GetEncoding(1252)
and in PowerShell Core you can use
$newLine | Out-File -FilePath "check2.txt" -Encoding ([Text.Encoding]::GetEncoding(1252))
See How do I change my Powershell script so that it writes out-file in ANSI - Windows-1252 encoding?
Found the solution:
$file = "c:\temp\check-ansi.txt"
$newLine = "Ein Test öÖäÄüÜ"
Remove-Item $file
[IO.File]::WriteAllLines($file, $newLine, [Text.Encoding]::GetEncoding(28591)) # 28591 = ISO 8859-1 (Latin-1)
I used the answer to this question:
Using PowerShell to write a file in UTF-8 without the BOM
to re-encode a file (UCS-2) to UTF-8. The problem is that if I run the conversion twice (or more times), the Cyrillic text gets broken. How do I skip the conversion if the file is already in UTF-8?
The code is:
$MyFile = Get-Content $MyPath
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
[System.IO.File]::WriteAllLines($MyPath, $MyFile, $Utf8NoBomEncoding)
Use:
$MyFile = Get-Content -Encoding UTF8 $MyPath
Initially, when $MyPath is UTF-16LE-encoded ("Unicode" encoding, which I assume is what you meant), PowerShell will ignore the -Encoding parameter due to the presence of a BOM in the file, which unambiguously identifies the encoding.
If your original file does not have a BOM, more work is needed.
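A sketch of that extra work, under two assumptions: Windows PowerShell 5.1 (where Get-Content supports -Encoding Byte; PowerShell Core uses -AsByteStream instead), and that a not-yet-converted file still carries the usual UTF-16LE BOM (FF FE). Sniff the first two bytes and convert only when that BOM is present:

$bytes = Get-Content $MyPath -Encoding Byte -TotalCount 2
if (($bytes.Count -ge 2) -and ($bytes[0] -eq 0xFF) -and ($bytes[1] -eq 0xFE)) {
    # Still UTF-16LE: the BOM lets Get-Content decode it correctly.
    $MyFile = Get-Content $MyPath
    $Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
    [System.IO.File]::WriteAllLines($MyPath, $MyFile, $Utf8NoBomEncoding)
}
# If the BOM is absent, the file is assumed to be UTF-8 already and is left alone.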
Once you've saved $MyPath as UTF-8 without BOM, you must tell Windows PowerShell[1] that you expect UTF-8 encoding with -Encoding UTF8, as it interprets files as "ANSI"-encoded by default (encoded according to the typically single-byte code page associated with the legacy system locale).
[1] Note that the cross-platform PowerShell Core edition defaults to BOM-less UTF-8.
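Putting the answer together, a version of the question's snippet that stays stable across repeated runs (a sketch; the target remains BOM-less UTF-8 as in the question):

$MyFile = Get-Content -Encoding UTF8 $MyPath   # a UTF-16LE BOM still wins on the first run
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False
[System.IO.File]::WriteAllLines($MyPath, $MyFile, $Utf8NoBomEncoding)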
I want to create big5-encoded Chinese characters and save them into a txt file with powershell.
I know that in Windows cmd.exe I can easily create Big5 characters with something like this:
echo 信 > testBig5.txt
However, in PowerShell the above command creates a file with the Chinese characters encoded in UTF-16LE.
I use a binary editor (e.g. UltraEdit, or Notepad++'s HEX-Editor plugin) to check whether the characters are encoded in Big5 or not.
You may use this tool to view the Big5 encoding of a Chinese character.
=========================== edit 2017/3/31 =============================
I found something interesting:
The Big5 encoding of 信 is ab 48.
echo 信 | out-file -filepath abc.txt -encoding Default
echo 信信 | out-file -filepath abc.txt -encoding Default
echo 信信信 | out-file -filepath abc.txt -encoding Default
echo 信信信信 | out-file -filepath abc.txt -encoding Default
All of the 信s generated by the above code are ab 48.
However, once the string gets longer than 4 characters, the encoding of 信 becomes e4 bf a1, which is its UTF-8 encoding.
echo 信信信信信 | out-file -filepath abc.txt -encoding Default # 信 is encoded as e4 bf a1 here.
It seems that the final encoding depends on the length of the string.
Now the question becomes how to generate long Big5-encoded strings with PowerShell.
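One way around Out-File's limited encoding list is to ask .NET for code page 950 (Big5) directly. A sketch (the output path is just an example), which should behave the same regardless of string length:

# Windows PowerShell ships the code-page encodings out of the box; on PowerShell Core,
# first run: [Text.Encoding]::RegisterProvider([Text.CodePagesEncodingProvider]::Instance)
$text = '信' * 10
$big5 = [Text.Encoding]::GetEncoding(950)   # code page 950 = Big5
[IO.File]::WriteAllText('C:\temp\testBig5.txt', $text, $big5)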
I have a file called test.txt which contains a single Chinese character, 中, in it.
Under a hex editor, this file's bytes are e4 b8 ad 0a.
If I do get-content test.txt | Out-File test_output.txt, the content of test_output.txt is different from test.txt. Why is this happening?
I've tried all the encoding parameters listed here ("Unicode", "UTF7", "UTF8", "UTF32", "ASCII", "BigEndianUnicode", "Default", and "OEM"), but none of them correctly converts the Chinese character.
How can I correctly convert Chinese characters using Get-Content and Out-File?
The encoding, e4 b8 ad, looks like the URL-encoding of 中; is this why all the encoding parameters are incompatible with this Chinese character?
I use Notepad++ and Notepad++'s hex-editor plugin as my text-editor and hex-editor, respectively.
I tried get-content test.txt -encoding UTF8 | Out-File test_output.txt -encoding UTF8
My test.txt is "e4 b8 ad 0a". And the output is "ef bb bf e4 b8 ad 0d 0a"
test.txt is in UTF-8.
In Windows PowerShell, Get-Content doesn't recognize UTF-8 unless the file has a BOM, and Out-File writes UTF-16 ("Unicode") by default.
So specifying the encoding for both commands is necessary.
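If the BOM (ef bb bf) that Out-File prepends is also unwanted, a BOM-less write is possible. A sketch, assuming Windows PowerShell 3.0+ (for -Raw); this should reproduce the input bytes exactly, including the bare LF:

$text = Get-Content test.txt -Encoding UTF8 -Raw
# WriteAllText resolves relative paths against the process working directory,
# hence the explicit $pwd prefix.
[IO.File]::WriteAllText("$pwd\test_output.txt", $text, (New-Object Text.UTF8Encoding $false))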
In my case, the Unicode encoding solved my problem with the Chinese characters. The file I was modifying contained C# code on a TFS server.
$path="test.cs"
Get-Content -Path $path -Encoding Unicode
Set-Content -Path $path -Encoding Unicode
It might help somebody else.
I have a file saved as UCS-2 Little Endian I want to change the encoding so I ran the following code:
cat tmp.log -encoding UTF8 > new.log
The resulting file is still in UCS-2 Little Endian. Is this because the pipeline is always in that format? Is there an easy way to pipe this to a new file as UTF8?
As suggested here:
Get-Content tmp.log | Out-File -Encoding UTF8 new.log
I would do it like this:
get-content tmp.log -encoding Unicode | set-content new.log -encoding UTF8
My understanding is that the -Encoding option selects the encoding that the file should be read or written in.
How do I load content from an XML file with the right encoding?
(Get-Content -Encoding UTF8 $fileName)
If you are reading an XML file, here's an even better way that adapts to the encoding of your XML file:
$xml = New-Object -TypeName xml
$xml.Load('foo.xml')
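One caveat: XmlDocument.Load resolves a relative path against the process working directory, which can differ from PowerShell's current location, so resolving the path first is safer (a sketch):

$xml = New-Object -TypeName xml
$xml.Load((Resolve-Path 'foo.xml').ProviderPath)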
PowerShell's Get-Content/Set-Content -Encoding parameter doesn't cover every encoding. You may need to use IO.File instead, for example to load a file using Windows-1252:
$myString = [IO.File]::ReadAllText($filePath, [Text.Encoding]::GetEncoding(1252))
See [Text.Encoding]::GetEncoding and [Text.Encoding]::GetEncodings.
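To see which names and code pages are available, enumerating them is straightforward (on .NET Core the full code-page list again requires registering the CodePagesEncodingProvider first):

[Text.Encoding]::GetEncodings() | Format-Table CodePage, Name, DisplayName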