How do I print a UTF-16 string in Zig?

I've been trying to code a UTF-16 string structure, and although the standard library provides a unicode module, it doesn't seem to provide a way to print out a slice of u16.
I've tried this:
const std = @import("std");
const unicode = std.unicode;
const stdout = std.io.getStdOut().outStream();
pub fn main() !void {
const unicode_str = unicode.utf8ToUtf16LeStringLiteral("😎 hello! 😎");
try stdout.print("{}\n", .{unicode_str});
}
This outputs:
[12:0]u16@202e9c
Is there a way to print a unicode string ([]u16) without converting it back into a non-unicode string ([]u8)?

Both []const u8 and []const u16 store encoded Unicode codepoints. Unicode codepoints fit within the range 0..1,114,112, so an actual Unicode string with one array index per codepoint would have to be []const u21. UTF-8 and UTF-16 both require a multi-unit encoding for codepoints that don't fit in a single code unit. Unless there is a compatibility reason to use UTF-16 (like some Windows functions), you should probably be using []const u8 Unicode strings.
To print UTF-16 to a UTF-8 stream, you have to decode the UTF-16 and re-encode it as UTF-8. There is currently no formatting specifier that does this automatically.
You can either convert the entire string at once, requiring allocation:
const utf8string = try std.unicode.utf16leToUtf8Alloc(alloc, utf16le);
Or, without allocation:
var writer = std.io.getStdOut().writer();
var it = std.unicode.Utf16LeIterator.init(utf16le);
while (try it.nextCodepoint()) |codepoint| {
var buf: [4]u8 = undefined;
const len = try std.unicode.utf8Encode(codepoint, &buf);
try writer.writeAll(buf[0..len]);
}
Note that this will be very slow if you are writing somewhere that requires a syscall per write, unless you use a buffered writer.
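For illustration, a rough sketch of the same loop with a buffered writer (this assumes a Zig version where std.io.bufferedWriter and writer() are available; the standard library I/O API has changed between releases):
const std = @import("std");

pub fn main() !void {
    const utf16le = std.unicode.utf8ToUtf16LeStringLiteral("😎 hello! 😎");

    // Wrap stdout in a buffered writer so each codepoint does not cost a syscall.
    var buffered = std.io.bufferedWriter(std.io.getStdOut().writer());
    const writer = buffered.writer();

    var it = std.unicode.Utf16LeIterator.init(utf16le);
    while (try it.nextCodepoint()) |codepoint| {
        var buf: [4]u8 = undefined;
        const len = try std.unicode.utf8Encode(codepoint, &buf);
        try writer.writeAll(buf[0..len]);
    }
    try writer.writeAll("\n");
    try buffered.flush(); // don't forget to flush the remaining buffered bytes
}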

Related

UTF-16LE txt file decode as String in Flutter (dart)

To read the contents of a .txt file I am using
List<String> linesList = await file.readAsLines(encoding: latin1);
return linesList;
Files with encoding UTF-8 are working perfectly with the above code.
But for encoding UTF-16LE it's returning a list whose length is double the number of lines in the file, and all entries are empty except the first line. That first index contains ÿþ#
As package:utf is now abandoned (and therefore will never support null-safety), another way to read a UTF-16LE file as a String is to take advantage of Dart Strings using UTF-16 code units internally. You therefore can read the file, interpret the data as 16-bit (unsigned) integers, and then create a String using those as UTF-16 code units:
Basically:
var f = File("utf-16le.txt");
var bytes = f.readAsBytesSync();
// Note that this assumes that the system's native endianness is the same as
// the file's.
var utf16CodeUnits = bytes.buffer.asUint16List();
var s = String.fromCharCodes(utf16CodeUnits);
I also leave it as an exercise for the reader to deal with potential BOMs at the beginning of the file.
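For what it's worth, here is a rough sketch of that exercise (it assumes the file starts with either a UTF-16LE BOM, a UTF-16BE BOM, or no BOM at all, and readUtf16 is just an illustrative name):
import 'dart:io';
import 'dart:typed_data';

String readUtf16(String path) {
  var bytes = File(path).readAsBytesSync();
  // Interpret the bytes as 16-bit code units (assumes an even byte count and
  // a little-endian machine, as in the snippet above).
  var units = bytes.buffer.asUint16List(bytes.offsetInBytes, bytes.lengthInBytes ~/ 2);
  if (units.isNotEmpty && units.first == 0xFEFF) {
    // BOM matches the native byte order: just drop it.
    units = units.sublist(1);
  } else if (units.isNotEmpty && units.first == 0xFFFE) {
    // BOM is byte-swapped relative to the native order: swap every code unit.
    units = Uint16List.fromList(
        units.sublist(1).map((u) => ((u & 0xFF) << 8) | (u >> 8)).toList());
  }
  return String.fromCharCodes(units);
}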
Also see https://github.com/dart-lang/convert/issues/30, which requests that the Dart SDK provide UTF-16 conversion functions.
So the first credit goes to @Richard_Heap, who commented on the question above. He mentioned a Dart package that encodes and decodes UTF formats. I have been able to decode the txt files as expected in my Flutter app using this package.
First, I identify the UTF format type with these functions available from the package mentioned by @Richard_Heap:
List<int> bytes = await file.readAsBytes();
hasUtf16beBom(bytes)
hasUtf16Bom(bytes)
hasUtf16leBom(bytes)
hasUtf32beBom(bytes)
hasUtf32Bom(bytes)
hasUtf32leBom(bytes)
There are different decoder and encoder functions in this package that can be used once the UTF format is known from the functions above. For example, I used:
String decodedString = decodeUtf16le(bytes);
Check out the charset package. It is null-safe and, according to its documentation, supports both UTF-16BE and UTF-16LE.
Usage:
import 'package:charset/charset.dart';
main() {
// default
print(utf16.decode([254, 255, 78, 10, 85, 132, 130, 229, 108, 52]));
print(utf16.encode("上善若水"));
// detect
print(hasUtf16Bom([0xFE, 0xFF, 0x6C, 0x34]));
// advance
Utf16Encoder encoder = utf16.encoder as Utf16Encoder;
print(encoder.encodeUtf16Be("上善若水", false));
print(encoder.encodeUtf16Le("上善若水", true));
}

Convert persian unicode to Ascii

I need to get the ASCII code of a Persian string to use it in a program, but the method below gives question marks: "??? ????"
public string PerisanAscii()
{
//persian string
string unicodeString = "صبح بخیر";
// Create two different encodings.
Encoding ascii = Encoding.ASCII;
Encoding unicode = Encoding.Unicode;
// Convert the string into a byte array.
byte[] unicodeBytes = unicode.GetBytes(unicodeString);
// Perform the conversion from one encoding to the other.
byte[] asciiBytes = Encoding.Convert(unicode, ascii, unicodeBytes);
// Convert the new byte[] into a char[] and then into a string.
char[] asciiChars = new char[ascii.GetCharCount(asciiBytes, 0, asciiBytes.Length)];
ascii.GetChars(asciiBytes, 0, asciiBytes.Length, asciiChars, 0);
string asciiString = new string(asciiChars);
return asciiString;
}
Can you help me?
Best regards,
Mohsen
You can convert Persian UTF8 data to Windows-1256 (Arabic Windows):
var enc1256 = Encoding.GetEncoding("windows-1256");
var data = enc1256.GetBytes(unicodeString);
System.IO.File.WriteAllBytes(path, data);
ASCII does not support Persian. You may need the old-school Iran System encoding standard; this is determined by your AutoCAD application. I don't know whether Windows has a direct Encoding for it, but you can also convert characters manually. It's a simple mapping, as sketched below.
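As a purely hypothetical sketch of what such a manual mapping could look like in C# (the real Iran System byte values are not shown; they would have to be filled in from an Iran System code table, and IranSystemEncoder is just an illustrative name):
using System.Collections.Generic;

static class IranSystemEncoder
{
    // Placeholder table: map each Persian character to its Iran System byte.
    // Fill in the real values from an Iran System code table.
    static readonly Dictionary<char, byte> Map = new Dictionary<char, byte>
    {
        // { 'ا', 0x?? }, { 'ب', 0x?? }, ...
    };

    public static byte[] GetBytes(string text)
    {
        var result = new byte[text.Length];
        for (int i = 0; i < text.Length; i++)
            result[i] = Map.TryGetValue(text[i], out var b) ? b : (byte)'?';
        return result;
    }
}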

What is the character encoding?

I have several characters that aren't recognized properly.
Characters like:
º
á
ó
(etc..)
This means that the character encoding is not UTF-8, right?
So can you tell me what character encoding it could be, please?
We don't have nearly enough information to really answer this, but the gist of it is: you shouldn't just guess. You need to work out where the data is coming from, and find out what the encoding is. You haven't told us anything about the data source, so we're completely in the dark. You might want to try Encoding.Default if these are files saved with something like Notepad.
If you know what the characters are meant to be and how they're represented in binary, that should suggest an encoding... but again, we'd need to know more information.
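For example, the Encoding.Default suggestion boils down to something like this (the file path here is made up):
using System.IO;
using System.Text;

// Decode the file with the machine's default ANSI code page.
string text = File.ReadAllText(@"C:\data\unknown.txt", Encoding.Default);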
Read this first: http://www.joelonsoftware.com/articles/Unicode.html
There are two encodings involved: the one that was used to encode the string and the one that is used to decode it. They must be the same to get the expected result; if they differ, some characters will be displayed incorrectly. We can try to guess which encodings are involved if you post the actual and expected results.
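For example, here is a small illustration of such a mismatch, using "café" as a stand-in for the real data:
using System;
using System.Text;

// Encode with UTF-8, then (incorrectly) decode with ISO-8859-1.
byte[] utf8Bytes = Encoding.UTF8.GetBytes("café");
string garbled = Encoding.GetEncoding(28591).GetString(utf8Bytes);
Console.WriteLine(garbled); // prints "cafÃ©" instead of "café"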
I wrote a couple of methods to narrow down the possibilities a while back for situations just like this.
static void Main(string[] args)
{
Encoding[] matches = FindEncodingTable('Ÿ');
Encoding[] enc2 = FindEncodingTable(159, 'Ÿ');
}
// Locates all Encodings with the specified Character and position
// "CharacterPosition": Decimal position of the character on the unknown encoding table. E.G. 159 on the extended ASCII table
//"character": The character to locate in the encoding table. E.G. 'Ÿ' on the extended ASCII table
static Encoding[] FindEncodingTable(int CharacterPosition, char character)
{
List<Encoding> matches = new List<Encoding>();
byte myByte = (byte)CharacterPosition;
byte[] bytes = { myByte };
foreach (EncodingInfo encInfo in Encoding.GetEncodings())
{
Encoding thisEnc = Encoding.GetEncoding(encInfo.CodePage);
char[] chars = thisEnc.GetChars(bytes);
if (chars[0] == character)
{
matches.Add(thisEnc);
}
}
return matches.ToArray();
}
// Locates all Encodings that contain the specified character
static Encoding[] FindEncodingTable(char character)
{
List<Encoding> matches = new List<Encoding>();
foreach (EncodingInfo encInfo in Encoding.GetEncodings())
{
Encoding thisEnc = Encoding.GetEncoding(encInfo.CodePage);
char[] chars = { character };
byte[] temp = thisEnc.GetBytes(chars);
// GetBytes never returns null, so round-trip the bytes and check that the character survived.
if (thisEnc.GetString(temp) == character.ToString())
matches.Add(thisEnc);
}
return matches.ToArray();
}
Encoding is a way of transforming existing content so that it can be parsed by the required destination protocol.
An example of encoding can be seen when browsing the internet:
The URL you visit: www.example.com, may have the search facility to run custom searches via the URL address:
www.example.com?search=...
The variables in the URL query string require URL encoding. If you were to write:
www.example.com?search=cat food cheap
The browser wouldn't understand your request, because you have used an invalid character: ' ' (a white space).
To correct this encoding error you should replace the ' ' with '%20' to form this URL:
www.example.com?search=cat%20food%20cheap
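In C#, for instance, the framework can do this percent-encoding for you; a minimal sketch using the same example search:
using System;

// Percent-encode a query-string value: the space becomes %20.
string encoded = Uri.EscapeDataString("cat food cheap");
Console.WriteLine("www.example.com?search=" + encoded);
// prints: www.example.com?search=cat%20food%20cheap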
Different systems use different forms of encoding; in this example I have used standard hex (percent) encoding for a URL. In other applications and instances you may find the need to use other types of encoding.
Good Luck!

What's the CFString Equiv of NSString's UTF8String?

I'm stuck on stoopid today as I can't convert a simple piece of ObjC code to its Cpp equivalent. I have this:
const UInt8 *myBuffer = [(NSString*)aRequest UTF8String];
And I'm trying to replace it with this:
const UInt8 *myBuffer = (const UInt8 *)CFStringGetCStringPtr(aRequest, kCFStringEncodingUTF8);
This is all in a tight unit test that writes an example HTTP request over a socket with CFNetwork APIs. I have working ObjC code that I'm trying to port to C++. I'm gradually replacing NS API calls with their toll-free bridged equivalents. Everything has been one-for-one so far, until this last line. This is the last piece that needs to be completed.
This is one of those things where Cocoa does all the messy stuff behind the scenes, and you never really appreciate just how complicated things can be until you have to roll up your sleeves and do it yourself.
The simple answer for why it's not 'simple' is because NSString (and CFString) deal with all the complicated details of dealing with multiple character sets, Unicode, etc, etc, while presenting a simple, uniform API for manipulating strings. It's object oriented at its best- the details of 'how' (NS|CF)String deals with strings that have different string encodings (UTF8, MacRoman, UTF16, ISO 2022 Japanese, etc) is a private implementation detail. It all 'just works'.
It helps to understand how [@"..." UTF8String] works. This is a private implementation detail, so this isn't gospel, but it is based on observed behavior. When you send a string a UTF8String message, the string does something approximating the following (not actually tested, so consider it pseudo-code; there are actually simpler ways to do the exact same thing, so this is overly verbose):
- (const char *)UTF8String
{
NSUInteger utf8Length = [self lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
NSMutableData *utf8Data = [NSMutableData dataWithLength:utf8Length + 1UL];
char *utf8Bytes = [utf8Data mutableBytes];
[self getBytes:utf8Bytes
maxLength:utf8Length
usedLength:NULL
encoding:NSUTF8StringEncoding
options:0UL
range:NSMakeRange(0UL, [self length])
remainingRange:NULL];
return(utf8Bytes);
}
You don't have to worry about the memory management issues of dealing with the buffer that -UTF8String returns because the NSMutableData is autoreleased.
A string object is free to keep the contents of the string in whatever form it wants, so there's no guarantee that its internal representation is the one that would be most convenient for your needs (in this case, UTF8). If you're using just plain C, you're going to have to deal with managing some memory to hold any string conversions that might be required. What was once a simple -UTF8String method call is now much, much more complicated.
Most of NSString is actually implemented in/with CoreFoundation / CFString, so there's obviously a path from a CFStringRef -> -UTF8String. It's just not as neat and simple as NSString's -UTF8String. Most of the complication is with memory management. Here's how I've tackled it in the past:
void someFunction(void) {
CFStringRef cfString; // Assumes 'cfString' points to a (NS|CF)String.
const char *useUTF8StringPtr = NULL;
UInt8 *freeUTF8StringPtr = NULL;
CFIndex stringLength = CFStringGetLength(cfString), usedBytes = 0L;
if((useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingUTF8)) == NULL) {
if((freeUTF8StringPtr = malloc(stringLength + 1L)) != NULL) {
CFStringGetBytes(cfString, CFRangeMake(0L, stringLength), kCFStringEncodingUTF8, '?', false, freeUTF8StringPtr, stringLength, &usedBytes);
freeUTF8StringPtr[usedBytes] = 0;
useUTF8StringPtr = (const char *)freeUTF8StringPtr;
}
}
long utf8Length = (long)((freeUTF8StringPtr != NULL) ? usedBytes : stringLength);
if(useUTF8StringPtr != NULL) {
// useUTF8StringPtr points to a NULL terminated UTF8 encoded string.
// utf8Length contains the length of the UTF8 string.
// ... do something with useUTF8StringPtr ...
}
if(freeUTF8StringPtr != NULL) { free(freeUTF8StringPtr); freeUTF8StringPtr = NULL; }
}
NOTE: I haven't tested this code, but it is modified from working code. So, aside from obvious errors, I believe it should work.
The above tries to get the pointer to the buffer that CFString uses to store the contents of the string. If CFString happens to have the string contents encoded in UTF8 (or a suitably compatible encoding, such as ASCII), then it's likely CFStringGetCStringPtr() will return non-NULL. This is obviously the best, and fastest, case. If it can't get that pointer for some reason, say if CFString has its contents encoded in UTF16, then it allocates a buffer with malloc() that is large enough to contain the entire string when it is transcoded to UTF8. Then, at the end of the function, it checks to see if memory was allocated and free()'s it if necessary.
And now for a few tips and tricks... CFString 'tends to' (and this is a private implementation detail, so it can and does change between releases) keep 'simple' strings encoded as MacRoman, which is an 8-bit wide encoding. MacRoman, like UTF8, is a superset of ASCII, such that all characters < 128 are equivalent to their ASCII counterparts (or, in other words, any character < 128 is ASCII). In MacRoman, characters >= 128 are 'special' characters. They all have Unicode equivalents, and tend to be things like extra currency symbols and 'extended western' characters. See Wikipedia - MacRoman for more info. But just because a CFString says it's MacRoman (CFString encoding value of kCFStringEncodingMacRoman, NSString encoding value of NSMacOSRomanStringEncoding) doesn't mean that it has characters >= 128 in it. If a kCFStringEncodingMacRoman encoded string returned by CFStringGetCStringPtr() is composed entirely of characters < 128, then it is exactly equivalent to its ASCII (kCFStringEncodingASCII) encoded representation, which is also exactly equivalent to the string's UTF8 (kCFStringEncodingUTF8) encoded representation.
Depending on your requirements, you may be able to 'get by' using kCFStringEncodingMacRoman instead of kCFStringEncodingUTF8 when calling CFStringGetCStringPtr(). Things 'may' (probably) be faster if you require strict UTF8 encoding for your strings but use kCFStringEncodingMacRoman, then check to make sure the string returned by CFStringGetCStringPtr(string, kCFStringEncodingMacRoman) only contains characters that are < 128. If there are characters >= 128 in the string, then go the slow route by malloc()ing a buffer to hold the converted results. Example:
CFIndex stringLength = CFStringGetLength(cfString), usedBytes = 0L;
useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingMacRoman);
for(CFIndex idx = 0L; (useUTF8StringPtr != NULL) && (useUTF8StringPtr[idx] != 0); idx++) {
if(((const unsigned char *)useUTF8StringPtr)[idx] >= 128) { useUTF8StringPtr = NULL; }
}
if((useUTF8StringPtr == NULL) && ((freeUTF8StringPtr = malloc(stringLength + 1L)) != NULL)) {
CFStringGetBytes(cfString, CFRangeMake(0L, stringLength), kCFStringEncodingUTF8, '?', false, freeUTF8StringPtr, stringLength, &usedBytes);
freeUTF8StringPtr[usedBytes] = 0;
useUTF8StringPtr = (const char *)freeUTF8StringPtr;
}
Like I said, you don't really appreciate just how much work Cocoa does for you automatically until you have to do it all yourself. :)
In the sample code above, the following appears:
CFIndex stringLength = CFStringGetLength(cfString)
stringLength is then being used to malloc() a temporary buffer of that many bytes, plus 1.
But the header file for CFStringGetLength() expressly says it returns the number of 16-bit Unicode characters, not bytes. So if some of those Unicode characters are outside the ASCII range, the malloc() buffer won't be long enough to hold the UTF-8 conversion of the string.
Perhaps I'm missing something, but to be absolutely safe, the number of bytes needed to hold N arbitrary Unicode characters is at most 4*N, when they're all converted to UTF-8.
From the documentation:
Whether or not this function returns a valid pointer or NULL depends on many factors, all of which depend on how the string was created and its properties. In addition, the function result might change between different releases and on different platforms. So do not count on receiving a non-NULL result from this function under any circumstances.
You should use CFStringGetCString if CFStringGetCStringPtr returns NULL.
Here's some working code. I started with @johne's answer, replaced CFStringGetBytes with CFStringGetCString for simplicity, and made the correction suggested by @Doug.
const char *useUTF8StringPtr = NULL;
char *freeUTF8StringPtr = NULL;
if ((useUTF8StringPtr = CFStringGetCStringPtr(cfString, kCFStringEncodingUTF8)) == NULL)
{
CFIndex stringLength = CFStringGetLength(cfString);
CFIndex maxBytes = 4 * stringLength + 1;
freeUTF8StringPtr = malloc(maxBytes);
CFStringGetCString(cfString, freeUTF8StringPtr, maxBytes, kCFStringEncodingUTF8);
useUTF8StringPtr = freeUTF8StringPtr;
}
// ... do something with useUTF8StringPtr...
if (freeUTF8StringPtr != NULL)
free(freeUTF8StringPtr);
If it's destined for a socket, perhaps CFStringGetBytes() would be your best choice?
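For example, a rough sketch of that approach: convert in fixed-size chunks with CFStringGetBytes() and write each chunk straight to the descriptor (writeCFStringUTF8 and fd are made-up names here, and error handling is omitted):
#include <CoreFoundation/CoreFoundation.h>
#include <unistd.h>

// Convert the string to UTF-8 a chunk at a time and write each chunk,
// so the whole conversion never has to live in one malloc()ed buffer.
static void writeCFStringUTF8(CFStringRef str, int fd) {
    CFIndex length = CFStringGetLength(str);
    CFIndex offset = 0;
    UInt8 chunk[1024];
    while (offset < length) {
        CFIndex usedBytes = 0;
        CFIndex converted = CFStringGetBytes(str,
            CFRangeMake(offset, length - offset),
            kCFStringEncodingUTF8, '?', false,
            chunk, sizeof(chunk), &usedBytes);
        if (converted == 0) break;            // nothing more could be converted
        write(fd, chunk, (size_t)usedBytes);  // check the return value in real code
        offset += converted;
    }
}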
Also note that the documentation for CFStringGetCStringPtr() says:
This function either returns the requested pointer immediately, with no memory allocations and no copying, in constant time, or returns NULL. If the latter is the result, call an alternative function such as the CFStringGetCString function to extract the characters.
Here's a way to printf a CFStringRef which implies we get a '\0'-terminated string from a CFStringRef:
// from: http://lists.apple.com/archives/carbon-development/2001/Aug/msg01367.html
// by Ali Ozer
// gcc -Wall -O3 -x objective-c -fobjc-exceptions -framework Foundation test.c
#import <stdio.h>
#import <Foundation/Foundation.h>
/*
This function will print the provided arguments (printf style varargs) out to the console.
Note that the CFString formatting function accepts "%@" as a way to display CF types.
For types other than CFString and CFNumber, the result of %@ is mostly for debugging
and can differ between releases and different platforms. Cocoa apps (or any app which
links with the Foundation framework) can use NSLog() to get this functionality.
*/
void show(CFStringRef formatString, ...) {
CFStringRef resultString;
CFDataRef data;
va_list argList;
va_start(argList, formatString);
resultString = CFStringCreateWithFormatAndArguments(NULL, NULL, formatString, argList);
va_end(argList);
data = CFStringCreateExternalRepresentation(NULL, resultString,
CFStringGetSystemEncoding(), '?');
if (data != NULL) {
printf ("%.*s\n", (int)CFDataGetLength(data), CFDataGetBytePtr(data));
CFRelease(data);
}
CFRelease(resultString);
}
int main(void)
{
// To use:
int age = 25;
CFStringRef name = CFSTR("myname");
show(CFSTR("Name is %#, age is %d"), name, age);
return 0;
}

How to recover a text from a wrong encoding?

I have got some files created on an Asian OS (Chinese and Japanese XPs);
the file name is garbled, for example:
иè+¾«Ñ¡Õä²ØºÏ¼­
How can I recover the original text?
I tried this in C#:
Encoding unicode = Encoding.Unicode;
Encoding chinese = Encoding.GetEncoding(936);
byte[] chineseBytes = chinese.GetBytes(garbledString);
byte[] unicodeBytes = Encoding.Convert(unicode, chinese, chineseBytes);
// (Then convert the bytes to a string)
and I tried changing unicode to windows-1252, but no luck.
It's a double-encoded text. The original is in Windows-936, then some application assumed the text was in ISO-8859-1 and encoded the result to UTF-8. Here is an example of how to decode it in Python:
>>> print 'иè+¾«Ñ¡Õä²ØºÏ¼­'.decode('utf8').encode('latin1').decode('cp936')
新歌+精选珍藏合辑
I'm sure you can do something similar in C#.
Encoding unicode = Encoding.Unicode;
That's not what you want. "Unicode" is Microsoft's totally misleading name for what is really the UTF-16LE encoding. UTF-16LE plays no part here; what you have is a simple case where a 936 string has been misdecoded as 1252.
Windows codepage 1252 is similar but not the same as ISO-8859-1. There is no way to tell which is in the example string as it does not contain any of the bytes 0x80-0x9F which are different in the two encodings, but I'm assuming 1252 because that's the standard codepage on a western Windows install.
Encoding latin = Encoding.GetEncoding(1252);
Encoding chinese = Encoding.GetEncoding(936);
string result = new string(chinese.GetChars(latin.GetBytes(s)));
The first argument to Encoding.Convert is the source encoding. Shouldn't that be chinese in your case? So
Encoding.Convert(chinese, unicode, chineseBytes);
might actually work. Because, after all, you want to convert CP-936 to Unicode and not vice-versa. And I'd suggest you don't even try bothering with CP-1252 since your text there is very likely not Latin.
This is an old question, but I just ran into the same situation while trying to migrate WordPress upload files off of an old Windows Server 2008 R2 server. bobince's answer set me on the right track, but I had to search for the right encoding/decoding pair.
With the following C#, I found the relevant encoding/decoding pair:
using System;
using System.Text;
public class Program
{
public static void Main()
{
// garbled
string s = "2020竹慶本樂ä»æ³¢åˆ‡äºžæ´²æ³•ç­µ-Intro-2-1024x643.jpg";
// expected
string t = "2020竹慶本樂仁波切亞洲法筵-Intro-2-1024x643.jpg";
foreach( EncodingInfo ei in Encoding.GetEncodings() ) {
Encoding e = ei.GetEncoding();
foreach( EncodingInfo ei2 in Encoding.GetEncodings() ) {
Encoding e2 = ei2.GetEncoding();
var s2 = e2.GetString(e.GetBytes(s));
if (s2 == t) {
var x = ei.CodePage;
Console.WriteLine($"e1={ei.DisplayName} (CP {ei.CodePage}), e2={ei2.DisplayName} (CP {ei2.CodePage})");
Console.WriteLine(t);
Console.WriteLine(s2);
}
}
}
Console.WriteLine("-----------");
Console.WriteLine(t);
Console.WriteLine(Encoding.GetEncoding(65001).GetString(Encoding.GetEncoding(1252).GetBytes(s)));
}
}
It turned out that the correct encoding/decoding pair in my case was:
e1=Western European (Windows) (CP 1252), e2=Unicode (UTF-8) (CP 65001)
So the last line of code is a one-liner for the correct conversion: Console.WriteLine(Encoding.GetEncoding(65001).GetString(Encoding.GetEncoding(1252).GetBytes(s)));