Illegal character '?' when I create the JSON using ConvertTo-Json - powershell

I am not a PowerShell guy, so please excuse me if my question is confusing.
We are creating a JSON file using ConvertTo-Json, and it successfully creates the JSON file. However, when I cat the contents of the JSON it has '??' at the beginning of the file, but the same is not seen when I download the file / view it in the file system.
Below is the PowerShell code used to create the JSON file:
$packageJson = @{
    packageName = "ABC.DEF.GHI"
    version = "1.1.1"
    branchName = "somebranch"
    oneOps = @{
        platform = "XYZ"
        component = "JNL"
    }
}
$packageJson | ConvertTo-Json -Depth 100 | Out-File "$packageName.json"
The above code creates the file successfully, and when I view the file everything looks fine, but when I cat the file it has a leading '??' as shown below:
??{
    "packageName": "ABC.DEF.GHI",
    "version": "0.1.0-looper-poc0529",
    "oneOps": {
        "platform": "XYZ",
        "component": "JNL"
    },
    "branchName": "somebranch"
}
Due to this I am unable to parse the JSON file, and it gives the following error:
com.jayway.jsonpath.InvalidJsonException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('?' (code 65533 / 0xfffd)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')

Those aren't ? characters. Those are two different unprintable characters that make up a Unicode byte order mark. You see ? because that's how the debugger, text editor, OS, or font in question renders unprintable characters.
To fix this, either change the output encoding, or use a character set on the other end that understands UTF-8. The former is a simpler fix, but the latter is probably better in the long run. Eventually you'll end up with data that needs an extended character.

tl;dr
It sounds like your Java code expects a UTF-8-encoded file without BOM, so direct use of the .NET Framework is needed:
[IO.File]::WriteAllText("$PWD/$packageName.json", ($packageJson | ConvertTo-Json))
As Tom Blodget points out, BOM-less UTF-8 is mandated by the IETF's JSON standard, RFC 8259.
Unfortunately, Windows PowerShell's default output encoding for Out-File and also for the redirection operator > is UTF-16LE ("Unicode"), in which:
(most) characters are represented as 2-byte units.
the file starts with a special 2-byte unit (0xFF 0xFE, the UTF-16LE encoding of the Unicode character U+FEFF), the so-called BOM (byte-order mark) or Unicode signature, which serves to identify the encoding.
If target programs do not understand this encoding, they treat the BOM as data (and would subsequently misinterpret the actual data), which causes the problem you saw.
The specific symptom you saw - a complaint about character U+FFFD, which is used as the generic stand-in for an invalid character in the input - suggests that your Java code likely expects UTF-8 encoding.
Unfortunately, using Out-File -Encoding utf8 is not a solution, because PowerShell invariably writes a BOM for UTF-8 as well, which Java doesn't expect.
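In Windows PowerShell you can see the BOM for yourself by inspecting the file's first two bytes; a minimal sketch:
# Write a file with Out-File's default ("Unicode") encoding, then dump
# its first two bytes - the UTF-16LE BOM:
'{}' | Out-File "$PWD\test.json"
[System.IO.File]::ReadAllBytes("$PWD\test.json")[0..1] | ForEach-Object { '0x{0:X2}' -f $_ }
# -> 0xFF, 0xFE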
Workarounds:
If you can be sure that the JSON string contains only characters in the 7-bit ASCII range (no accented characters), you can get away with Out-File -Encoding Ascii, as TheIncorrigible1 suggests.
Otherwise, use the .NET Framework directly to create your output file with BOM-less UTF-8 encoding (see the sketch after this list).
The answers to this question demonstrate solutions, one of which is shown in the "tl;dr" section at the top.
If it's an option, use the cross-platform PowerShell Core edition instead, whose default encoding is sensibly BOM-less UTF-8, for compatibility with the rest of the world.
Note that not all Windows PowerShell functionality is available in PowerShell Core, however, and vice versa, but future development efforts will focus on PowerShell Core.
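A minimal sketch of the .NET approach (assumes PowerShell v5+ for the ::new() syntax, plus the $packageJson and $packageName variables from the question):
# Explicitly request UTF-8 *without* a BOM ($false = do not emit a BOM):
$utf8NoBom = [System.Text.UTF8Encoding]::new($false)
[System.IO.File]::WriteAllText(
    "$PWD\$packageName.json",
    ($packageJson | ConvertTo-Json -Depth 100),
    $utf8NoBom
)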

A more general solution that's not specific to Out-File is to set these before you call ConvertTo-Json:
$OutputEncoding = [Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8;

Related

Converting a hex string to base 64 in PowerShell

I'm trying to replicate the functionality of the following Python snippet in PowerShell:
allowed_mac_separators = [':', '-', '.']
for sep in allowed_mac_separators:
    if sep in mac_address:
        test = codecs.decode(mac_address.replace(sep, ''), 'hex')
b64_mac_address = codecs.encode(test, 'base64')
address = codecs.decode(b64_mac_address, 'utf-8').rstrip()
It takes a MAC address, removes the separators, converts it to hex, and then base64. (I did not write the Python function and have no control over it or how it works.)
For example, the MAC address AA:BB:CC:DD:E2:00 would be converted to AABBCCDDE200, then to b'\xaa\xbb\xcc\xdd\xe2\x00', and finally as output b'qrvM3eIA'. I tried doing something like:
$bytes = 'AABBCCDDE200' | Format-Hex
[System.BitConverter]::ToString($bytes);
but that produces MethodException: Cannot find an overload for "ToString" and the argument count: "1", and I'm not really sure what it's looking for. All the examples I've found utilizing that call only have one argument. This works:
[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('AABBCCDDE200'))
but obviously doesn't convert it to hex first and thus yields the incorrect result. Any help is appreciated.
# Remove everything except word characters from the string.
# In effect, this removes any punctuation ('-', ':', '.')
$sanitizedHexStr = 'AA:BB:CC:DD:E2:00' -replace '\W'
# Convert all hex-digit pairs in the string to an array of bytes.
$bytes = [byte[]] -split ($sanitizedHexStr -replace '..', '0x$& ')
# Get the Base64 encoding of the byte array.
[System.Convert]::ToBase64String($bytes)
For an explanation of the technique used to create the $bytes array, as well as a simpler PowerShell (Core) 7.1+ / .NET 5+ alternative (in short: [System.Convert]::FromHexString('AABBCCDDE200')), see this answer.
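For reference, a minimal sketch of that simpler PowerShell (Core) 7.1+ / .NET 5+ alternative:
# .NET 5+ parses the hex-digit string directly into a [byte[]]:
$bytes = [System.Convert]::FromHexString('AABBCCDDE200')
[System.Convert]::ToBase64String($bytes) # -> qrvM3eIA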
As for what you tried:
Format-Hex does not return an array of bytes (directly); its primary purpose is to visualize the input data in hex format for the human observer.
In general, Format-* cmdlets output objects whose sole purpose is to provide formatting instructions to PowerShell's output-formatting system - see this answer. In short: only ever use Format-* cmdlets to format data for display, never for subsequent programmatic processing.
That said, in the particular case of Format-Hex the output objects, which are of type [Microsoft.PowerShell.Commands.ByteCollection], do contain useful data: the bytes of the transcoded characters of the input strings are available via the .Bytes property, as Cpt.Whale points out.
However, $bytes = ($sanitizedHexStr | Format-Hex).Bytes would not work in your case, because you'd effectively get byte values reflecting the ASCII code points of characters such as A (see below) - whereas what you need is the interpretation of these characters as hex digits.
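A quick sketch of that pitfall, assuming Windows PowerShell:
# These are the ASCII code points of the *characters* 'A', 'A', 'B', 'B'
# (0x41 = 65, 0x42 = 66), not the values the hex-digit pairs spell out:
('AABBCCDDE200' | Format-Hex).Bytes[0..3] # -> 65, 65, 66, 66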
But even in general I suggest not relying on Format-Hex for to-byte-array conversions:
On a philosophical note, as stated, the purpose of Format-* cmdlets is to produce for-display output, not data, and it's worth observing this distinction, this exception notwithstanding - the type of the output object could be considered an implementation detail.
Format-Hex converts strings to bytes by first applying a fixed character transcoding (i.e., you couldn't get the byte representation of a .NET string as-is, based on its UTF-16 code units), and that fixed transcoding differs between Windows PowerShell and PowerShell (Core):
In Windows PowerShell, the .NET string is transcoded to ASCII(!), resulting in the loss of non-ASCII-range characters - they are transcoded to literal ? characters.
In PowerShell (Core), that problem is avoided by transcoding to UTF-8.
The System.BitConverter.ToString() call failed because $bytes in your code wasn't itself a byte array ([byte[]]); only its .Bytes property value was (but it didn't contain the values of interest).
That said, you're not looking to reconvert bytes to a string, you're looking to convert the bytes directly to Base64-encoding, as shown above.
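For completeness, a sketch showing that with a genuine [byte[]] - such as $bytes from the answer above - BitConverter does work, just in the opposite direction of what you need:
# Round-trips the byte array back to a '-'-separated hex string:
[System.BitConverter]::ToString($bytes) # -> AA-BB-CC-DD-E2-00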

Powershell string variable with UTF-8 encoding

I checked many related questions about this, but I couldn't find something that solves my problem. Basically, I want to store a UTF-8 encoded string in a variable and then use that string as a file name.
For example, I'm trying to download a YouTube video. If we print the video title, the non-English characters show up (ytd here is youtube-dl):
./ytd https://www.youtube.com/watch?v=GWYndKw_zbw -e
Output: [LEEPLAY] 시티팝 입문 City Pop MIX (Playlist)
But if I store this in a variable and print it, the Korean characters are ignored:
$vtitle = ./ytd https://www.youtube.com/watch?v=GWYndKw_zbw -e
$vtitle
Output: [LEEPLAY] City Pop MIX (Playlist)
For a comprehensive overview of how PowerShell interacts with external programs, which includes sending data to them, see this answer.
When PowerShell interprets output from external programs (such as ytd in your case), it assumes that the output uses the character encoding reflected in [Console]::OutputEncoding.
Note:
Interpreting refers to cases where PowerShell captures (e.g., $output = ...), relays (e.g., ... | Select-String ...), or redirects (e.g., ... > output.txt) the external program's output.
By contrast, printing directly to the display may not be affected, because PowerShell then isn't involved, and certain CLIs adjust their behavior when their stdout isn't redirected to print directly to the console with full Unicode support (which explains why the characters looked as expected in your console when ytd's output printed directly to it).
If the encoding reported by [Console]::OutputEncoding is not the same encoding used by the external program at hand, PowerShell misinterprets the output.
To fix that, you must (temporarily) set [Console]::OutputEncoding to match the encoding used by the external program.
For instance, let's assume an executable foo.exe that outputs UTF-8-encoded text:
# Save the current encoding and switch to UTF-8.
$prev = [Console]::OutputEncoding
[Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
# PowerShell now interprets foo's output correctly as UTF-8-encoded,
# and $output will correctly contain CJK characters.
$output = foo https://example.org -e
# Restore the previous encoding.
[Console]::OutputEncoding = $prev
Important:
[Console]::OutputEncoding by default reflects the encoding associated with the legacy system locale's OEM code page, as reported by chcp (e.g. 437 on US-English systems).
Recent versions of Windows 10 now allow setting the system locale to code page 65001 (UTF-8) (the feature is still in beta as of Windows 10 version 1909), which is great, considering that most modern command-line utilities "speak" UTF-8 - but note that making this system-wide change has far-reaching consequences - see this answer.
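To check what your session currently uses, a quick sketch:
# Shows the active console output encoding and its code page number
# (the same number chcp reports):
[Console]::OutputEncoding # e.g., IBM437 on US-English systems
[Console]::OutputEncoding.CodePage # e.g., 437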
With the specific program at hand, youtube-dl, js2010 has discovered that capturing in a variable works without extra effort if you pass --encoding utf-16.
The reason this works is that the resulting UTF-16LE-encoded output is preceded by a BOM (byte-order mark).
(Note that --encoding utf-8 does not work, because youtube-dl then does not emit a BOM.)
Windows PowerShell is capable of detecting and properly decoding UTF-16LE-encoded and UTF-8-encoded text irrespective of the effective [Console]::OutputEncoding IF AND ONLY IF the output is preceded by a BOM.
Caveats:
This does not work in PowerShell Core (v6+, on any of the supported platforms).
Even in Windows PowerShell you'll rarely be able to take advantage of this obscure behavior, because using a BOM in stdout output is atypical (it is typically only used in files).
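If you need this often, the save/restore pattern shown earlier can be wrapped in a small helper; a sketch (the function name Invoke-WithUtf8Output is made up for illustration):
function Invoke-WithUtf8Output {
    param([scriptblock] $ScriptBlock)
    $prev = [Console]::OutputEncoding
    try {
        # Temporarily decode external-program output as UTF-8.
        [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new()
        & $ScriptBlock
    }
    finally {
        # Restore the original encoding even if the script block fails.
        [Console]::OutputEncoding = $prev
    }
}
# Hypothetical usage: $vtitle = Invoke-WithUtf8Output { ./ytd https://www.youtube.com/watch?v=GWYndKw_zbw -e }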
This works for me in the ISE. youtube-dl is from ytdl-org.github.io. Actually, the ISE wouldn't be needed, but the filename will only show correctly in something like Explorer.
# get title
# utf-16 has a BOM (or use utf-8-sig); this program is Python-based
$a = .\youtube-dl -e https://www.youtube.com/watch?v=Qpy7N4oFQUQ --encoding utf-16
$a
Gacharic Spin - 赤裸ライアー教則映像(short ver.)TOMO-ZO編
You might have similar luck in VS Code (or on macOS/Linux).

How do I write UTF8 with no BOM to console (no file)?

I have a PowerShell script that returns some strings via Write-Output.
I would like those lines to be UTF-8 with no BOM. I do not want a global setting; I just want this to be effective for the particular few lines I write at that time.
This other question helped me get to a point: Using PowerShell to write a file in UTF-8 without the BOM
I took inspiration from one of the answers, and wrote the following code:
$mystr = "test 1 2 3"
$mybytes = [Text.Encoding]::UTF8.GetBytes($mystr)
$OutStream = [console]::OpenStandardOutput()
$OutStream.Write($mybytes, 0, $mybytes.Length)
$OutStream.Close()
However this code ONLY writes to stdout, and if I try to redirect it, it ignores my request. In other words, putting that code in test.ps1 and running test.ps1 >out.txt still prints to the console instead of to out.txt.
Could someone recommend how I could write this code so in case a user redirects the output of my PS to a file via >, that output is UTF8 with no BOM?
To add to Frode F.'s helpful answer:
What you were ultimately looking to achieve was to write a raw byte stream to PowerShell's success-output stream (the equivalent of stdout in traditional shells[0]), not to the console.
The success output stream is what commands in PowerShell use to pass data to each other, including to the output-redirection operator >, at which point the console isn't involved.
(Data written to the success-output stream may end up displayed in the console, namely if the stream is neither captured in a variable nor redirected elsewhere.)
However, it is not possible to send raw byte streams to PowerShell's success output stream; only objects (instances of .NET types) can be sent, because PowerShell is fundamentally object-oriented.
Even data representing a stream of bytes must be sent as a .NET object, such as a [byte[]] array.
However, redirecting a [byte[]] array directly to a file with > does not write the array's raw bytes, because > creates a "Unicode" (UTF-16LE-encoded[1]) text representation of the array (as you would see if you printed the array to the console).
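A quick sketch of that pitfall:
# > writes the array's *text* representation (one decimal number per line,
# UTF-16LE-encoded), not the raw bytes 0x68 0x69:
[byte[]] (0x68, 0x69) > bytes.txt
Get-Content bytes.txt # -> lines '104' and '105'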
In order to encode objects as byte streams (that are often encoded text) for external sinks such as a file, you need the help of PowerShell cmdlets (e.g., Set-Content), > (the output redirection operator), or the methods of appropriate .NET types (e.g., [System.IO.File]), except in two special cases:
When piping to an external program, the encoding stored in preference variable $OutputEncoding is implicitly used.
When printing to the console, the encoding stored in [Console]::OutputEncoding is implicitly used; also, output from external programs is assumed to be encoded that way[2].
Generally, when it comes to text output, it is simpler to use the -Encoding parameter of output cmdlets such as Set-Content to let that cmdlet perform the encoding rather than trying to obtain a byte representation in a separate first step.
However, a BOM-less UTF-8 encoding cannot be selected this way in Windows PowerShell (it can in PowerShell Core), so using an explicit byte representation is an option, in combination with Set-Content -Encoding Byte[3]; e.g.:
# Write string "hü" to a UTF-8-encoded file *without BOM*:
[Text.Encoding]::UTF8.GetBytes('hü') |
Set-Content -Encoding Byte file.txt
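To verify the result, a sketch that dumps the file's raw bytes (in Windows PowerShell):
# The file starts directly with the UTF-8 bytes of 'hü' (0x68 0xC3 0xBC) -
# there is no 0xEF 0xBB 0xBF BOM prefix:
[System.IO.File]::ReadAllBytes("$PWD\file.txt") | ForEach-Object { '0x{0:X2}' -f $_ }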
[0] Writing to stdout from within PowerShell, as you attempted, bypasses PowerShell's own system of output streams and prints directly to the console. (As an aside: Console.OpenStandardOutput() is designed to bypass redirections even in the context of traditional shells.)
[1] Up to PowerShell v5.0, you couldn't change the encoding used by >; in PSv5.1 and above, you can use something like $PSDefaultParameterValues['Out-File:Encoding']='UTF8' - that would still include a BOM, however. For background, see this answer of mine.
[2] There is a noteworthy asymmetry: on sending text to external programs, $OutputEncoding defaults to ASCII (7-bit only) encoding, which means that any non-ASCII characters get transliterated to literal ? characters; by contrast, on interpreting text from external programs, the applicable [Console]::OutputEncoding defaults to the system's active legacy OEM code page, which is an 8-bit encoding. See the list of code pages supported by Windows.
[3] Of course, passing bytes through is not really an encoding; perhaps for that reason -Encoding Byte was removed from PowerShell Core, where -AsByteStream must be used instead.
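For reference, a sketch of the PowerShell (Core) equivalent alluded to in [3]:
# PowerShell (Core) 6+: -Encoding Byte was removed; -AsByteStream passes
# the bytes through verbatim instead:
[Text.Encoding]::UTF8.GetBytes('hü') | Set-Content -AsByteStream file.txt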
Encoding is used for saving text to a file, not for writing to the console. Your redirection operator > is the one saving the content, which means it decides the encoding. Redirection in PowerShell uses Unicode. If you need to use another encoding, you can't use redirection.
When you are writing to files, the redirection operators use Unicode encoding. If the file has a different encoding, the output might not be formatted correctly. To redirect content to non-Unicode files, use the Out-File cmdlet with its Encoding parameter.
Source: about_redirection
Normally you would use e.g. Out-File -FilePath test.txt -Encoding UTF8 inside your script, but it includes a BOM, so I'd recommend using WriteAllLines(path, contents), which uses UTF-8 without BOM by default.
[System.IO.File]::WriteAllLines("c:\test.txt", $MyOutputArray)

Trouble understanding C# URL decode with Unicode character(s) in PowerShell

I'm currently working on something that requires me to pass a Base64 string to a PowerShell script. But while decoding the string back to the original I'm getting some unexpected results: I need to use UTF-7 during decoding, and I don't understand why. Would someone know why?
The Mozilla documentation suggests that it's insufficient to use Base64 if you have Unicode characters in your string. Thus you need to use a workaround that consists of using encodeURIComponent and a replace. I don't really get why the replace is needed, so I shortened it to btoa(escape('✓ à la mode')) to encode the string. The result of that operation is JXUyNzEzJTIwJUUwJTIwbGElMjBtb2Rl.
Using PowerShell to decode the string back to the original, I need to first undo the Base64 encoding. In order to do so, System.Convert can be used (which results in a byte array), and its output can be converted to a UTF-8 string using System.Text.Encoding. Together this looks like the following:
$bytes = [System.Convert]::FromBase64String($inputstring)
$utf8string = [System.Text.Encoding]::UTF8.GetString($bytes)
What's left to do is URL decode the whole thing. As it is a UTF-8 string, I'd expect only to need to run the URL decode without any further parameters. But if you do that, you end up with an accented a that looks like � in a file or ? on the console. To get the actual original string, it's necessary to tell the URL decode to use UTF-7 as the character set. It's nice that this works, but I don't really get why it's necessary, since the string should be UTF-8, and UTF-8 certainly supports an accented a. See the last two lines of the entire script for what I mean. With those two lines you will end up with a file that has one line with the garbled text and one line with the original text, encoded as UTF-8.
Entire PowerShell script:
Add-Type -AssemblyName System.Web
$inputstring = "JXUyNzEzJTIwJUUwJTIwbGElMjBtb2Rl"
$bytes = [System.Convert]::FromBase64String($inputstring)
$utf8string = [System.Text.Encoding]::UTF8.GetString($bytes)
[System.Web.HttpUtility]::UrlDecode($utf8string) | Out-File -Encoding utf8 C:\temp\output.txt
[System.Web.HttpUtility]::UrlDecode($utf8string, [System.Text.UnicodeEncoding]::UTF7) | Out-File -Append -Encoding utf8 C:\temp\output.txt
Clarification:
The problem isn't the conversion of the Base64 to UTF-8. The problem is some inconsistent behavior of the UrlDecode of C#. If you run escape('✓ à la mode') in your browser, you will end up with the following string: %u2713%20%E0%20la%20mode. So we have a %uxxxx Unicode representation of the check mark and a %xx escape for the à. If we use this directly in UrlDecode, we end up with the same error. My current assumption would be that it's an issue with the encoding of the PowerShell window and pasting characters into it.
Turns out it actually isn't all that strange. It's just that for what I want to do, it's advantageous to use a newer function. I'm still not sure why it works if you use the UTF-7 encoding. But anyway, as an explanation:
... The hexadecimal form for characters, whose code unit value is 0xFF or less, is a two-digit escape sequence: %xx. For characters with a greater code unit, the four-digit format %uxxxx is used.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/escape
As TesselatingHecksler pointed out, What is the proper way to URL encode Unicode characters? indicates that the %u format was never formally standardized. A newer function to escape characters exists though, which is encodeURIComponent.
The encodeURIComponent() function encodes a Uniform Resource Identifier (URI) component by replacing each instance of certain characters by one, two, three, or four escape sequences representing the UTF-8 encoding of the character (will only be four escape sequences for characters composed of two "surrogate" characters).
The output of this function actually works with the C# implementation of UrlDecode without supplying an additional encoding of UTF-7.
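A minimal sketch illustrating this: %E2%9C%93%20%C3%A0%20la%20mode is what encodeURIComponent('✓ à la mode') produces, and UrlDecode handles it with its default (UTF-8) encoding:
Add-Type -AssemblyName System.Web
# Decodes the UTF-8 percent-escapes without an explicit encoding argument:
[System.Web.HttpUtility]::UrlDecode('%E2%9C%93%20%C3%A0%20la%20mode') # -> ✓ à la mode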
The originally linked Mozilla article about Base64-encoding UTF-8 strings modifies the whole process in a way that allows you to just call the Base64 decode function in order to get the whole string. This is realized by converting the URL-encoded version of the string to bytes.

Writing UTF16 file with std::fstream

Is it possible to imbue a std::fstream so that a std::string containing UTF-8 encoding can be streamed to a UTF-16 file?
I tried the following using the utf8-to-utf16 facet, but the result file is still UTF-8:
std::fstream utf16_stream("test.txt", std::ios_base::trunc | std::ios_base::out);
utf16_stream.imbue(std::locale(std::locale(), new std::codecvt_utf8_utf16<wchar_t,
    0x10ffff, std::codecvt_mode(std::generate_header | std::little_endian)>));
std::string utf8_string = "\x54\xE2\x83\xac\x73\x74";
utf16_stream << utf8_string;
References for the codecvt_utf8_utf16 facet seem to indicate it can be used to read and write UTF-8 files, not UTF-16 - is that correct, and if so, is there a simple way to do what I want to do?
File streams (by virtue of the requirements on std::basic_filebuf, §22.4.1.4.2 [locale.codecvt.virtuals]/3) do not support N:M character encoding conversions, as would be the case with UTF-8 internal / UTF-16 external.
You'd have to build a UTF-16 string, e.g. by using wstring_convert, reinterpret it as a sequence of bytes, and output it using usual (non-converting) std::ofstream.
Or, alternatively, convert UTF-8 to wide first, and then use std::codecvt_utf16 which produces UTF-16 as a sequence of bytes, and therefore, can be used with file streams.