I need to URL-encode a non-Latin string (Japanese, Chinese, or just non-ASCII characters in Spanish/French/Italian, etc.). I can't find any encoders or snippets that deal with more than just ASCII characters when creating a URL encoding. Is there a library or some feature of the OS I haven't found that can create a fully compliant URL encoding from any UTF-8 content?
Have you tried using
- (NSString *)stringByAddingPercentEscapesUsingEncoding:(NSStringEncoding)encoding
but specifying the appropriate NSStringEncoding for the language in question?
You can see all the available string encodings here
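Passing NSUTF8StringEncoding should amount to percent-escaping the string's UTF-8 bytes. For comparison, here is a minimal Python sketch of the same RFC 3986-style percent-encoding (the sample strings are arbitrary):

from urllib.parse import quote

# Percent-encode the UTF-8 bytes of arbitrary text (RFC 3986 style).
# safe='' also escapes '/', which you usually want for a single path segment.
print(quote(u'東京', safe=''))    # %E6%9D%B1%E4%BA%AC
print(quote(u'señor', safe=''))  # se%C3%B1or

In Python 3, quote() encodes str input as UTF-8 by default before escaping, which is exactly the behavior the question is after.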
I am a very stupid Programme Manager, and I have a client requesting that we send data in either ASCII or ANSI encoding.
Our programmers have used Unicode (UTF-16), so my question is: is Unicode (UTF-16) compatible with ASCII or ANSI? Or am I misunderstanding this? Do we need to change encodings?
We haven't tried anything yet.
In short: ASCII encoding contains 128 characters. ANSI encoding contains 256 characters. UTF-16 encoding has the capacity for 1,112,064 character codes. There is some nuance, such as the number of bytes used to store each character, but I don't think that is relevant here.
You can certainly convert a UTF-16 document down to ANSI or ASCII encoding, but any characters beyond those ranges will be lost (typically replaced with a substitute character such as '?', or dropped entirely).
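A quick way to see that loss is to down-convert in Python; a minimal sketch with an arbitrary sample string, where 'replace' substitutes '?' for anything the target encoding cannot represent:

text = u'café – jaźń'   # a mix of Western and Polish characters

# ASCII (128 characters): everything non-ASCII is lost.
print(text.encode('ascii', errors='replace'))    # b'caf? ? ja??'

# Windows-1252 ("ANSI", 256 characters): keeps the é and the dash,
# but the Polish ź and ń are still lost.
print(text.encode('cp1252', errors='replace'))   # b'caf\xe9 \x96 ja??'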
For you, as a manager, there are some questions to ask. At minimum:
Why does the client need this particular encoding? Can it be accommodated in some other way?
Are any characters in your data beyond the scope of ASCII/ANSI? Most (all?) programming languages provide a way to retrieve the integer representation of a character and determine whether it is beyond the range of the desired encoding. This can be leveraged to count how many characters in your data are not compatible with the desired encoding; see the sketch below.
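A minimal Python sketch of that check, assuming Windows-1252 as the "ANSI" code page (which code page "ANSI" means varies by locale):

def count_beyond(text, encoding):
    # A character is representable iff it encodes without error;
    # for ASCII this is equivalent to checking ord(ch) < 128.
    bad = 0
    for ch in text:
        try:
            ch.encode(encoding)
        except UnicodeEncodeError:
            bad += 1
    return bad

print(count_beyond(u'Zażółć gęślą', 'ascii'))    # 7
print(count_beyond(u'Zażółć gęślą', 'cp1252'))   # 6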
I'm currently reading mails from files and processing some of the header information. Non-ASCII characters are encoded according to RFC 2047 in quoted-printable or Base64, so the files contain no non-ASCII characters. If a file is encoded in UTF-8, Windows-1252, or one of the ISO-8859-* character encodings, I won't run into problems, because ASCII is embedded at the same place in all of these charsets (so 0x41 is an 'A' in all of them).
But what if the file is encoded using an encoding that does not embed ASCII in that way? Do encodings like this even exist? And if so, is there even a reliable way of detecting them?
There is a charset detector from Mozilla based on this very interesting article. It can detect a very large number of different encodings. There is also a port to C# available on GitHub, which I have used before; it turned out to be quite reliable. Of course, when the text contains only ASCII characters, it cannot distinguish between the different encodings that encode ASCII in the same way, but any encoding that encodes ASCII differently should be detected correctly by this library.
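The same detector is also available in Python as the chardet package; a minimal sketch, with a hypothetical file name:

import chardet  # pip install chardet; a port of Mozilla's universal charset detector

with open('mail.eml', 'rb') as f:  # hypothetical input file
    raw = f.read()

result = chardet.detect(raw)       # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
print(result['encoding'], result['confidence'])
text = raw.decode(result['encoding'] or 'ascii', errors='replace')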
Currently it seems that in order for UTF-8 characters to display in a portal message you need to decode them first.
Here is a snippet from my code:
self.context.plone_utils.addPortalMessage(_(u'This document (%s) has already been uploaded.' % (doc_obj.Title().decode('utf-8'))))
Given that Titles in Plone are already UTF-8 encoded, the string is a unicode string, and the underscore function is handled by i18ndude, I do not see a reason why we specifically need to decode from UTF-8. Usually I forget to add it and am only reminded once I get a UnicodeError.
Any thoughts? Is this the expected behavior of addPortalMessage? Is it i18ndude that is causing the issue?
UTF-8 is a representation of Unicode, not Unicode itself and not a Python unicode string. In Python, we convert back and forth between Python's unicode strings and encoded representations of Unicode via encode/decode.
Decoding a UTF-8 string via utf8string.decode('utf-8') produces a Python unicode string that may be concatenated with other unicode strings.
Python 2 will automatically convert a byte string to unicode if it needs to, using the ASCII decoder. That will fail if there are non-ASCII bytes in the string -- for example, because it is encoded in UTF-8.
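A minimal Python 2 sketch of both directions, and of the implicit ASCII decode that produces the UnicodeError mentioned in the question:

# -*- coding: utf-8 -*-
# Python 2 semantics, as in Plone at the time.
utf8string = 'h\xc3\xa9'                # the UTF-8 bytes of u'hé'

u = utf8string.decode('utf-8')          # byte string -> unicode string
assert u == u'h\xe9'
assert u.encode('utf-8') == utf8string  # and back again

# Mixing the two forces an implicit utf8string.decode('ascii'),
# which fails on the first non-ASCII byte:
try:
    print u'prefix: ' + utf8string
except UnicodeDecodeError as e:
    print e  # 'ascii' codec can't decode byte 0xc3 in position 1 ...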
I would like to convert RTF text to Unicode. In the RTF font table one can find the name of the font or font face (e.g. Arial Cyr, Courier Greek) and the charset to use with it (0-255). So how does one write a function that converts a character code (0-255) with these settings to Unicode?
As far as I can see, the font name suffixes like Greek, Cyr, Tur, etc. affect the glyphs of the displayed characters, and the charset affects them too. So the function could have these input parameters:
font name suffix, font charset, character code
But what comes next? Or am I on the wrong track?
RTF was invented long before Unicode, and it most certainly isn't ANSI text either. RTF itself uses only ASCII; non-ASCII characters are encoded in hex with a reference to a character set, a rather unholy mix of character sets. The mapping is also not perfect: many Unicode codepoints have no corresponding charset.
You'll spend a lifetime creating your own RTF-to-Unicode converter. Take advantage of an existing solution; almost any platform has one. On Windows that would be the RichEdit control. If you use .NET it is especially simple: use the RichTextBox class, assign its Rtf property, and read back its Text property, which is UTF-16 encoded Unicode.
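That said, the single-byte case the question describes reduces to a lookup from the RTF \fcharsetN value to a Windows code page. A minimal Python sketch covering only a handful of charset values (a full converter also has to deal with \'hh and \uN escapes and the double-byte charsets, which is where the lifetime goes):

# Partial map from RTF \fcharsetN values to Windows code pages.
RTF_CHARSET_TO_CODEPAGE = {
    0:   'cp1252',  # ANSI / Western
    161: 'cp1253',  # Greek
    162: 'cp1254',  # Turkish
    204: 'cp1251',  # Cyrillic (the "Cyr" font name suffix)
    238: 'cp1250',  # Eastern European
}

def rtf_char_to_unicode(code, charset):
    # Convert one character code (0-255) under the given \fcharset value.
    return bytes([code]).decode(RTF_CHARSET_TO_CODEPAGE[charset])

print(rtf_char_to_unicode(0xE0, 204))  # Cyrillic small 'а'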
A user complained that I don't support Polish character encodings for my iOS app, but I can't seem to figure out what encodings they're looking for.
According to the wikipedia page on the Polish alphabet:
The standard 8-bit character encoding for the Polish alphabet is ISO 8859-2 (Latin-2)
To find the CFStringEncoding for Latin-2, look under External String Encodings here, to find this information:
enum {
    ...
    kCFStringEncodingISOLatin2 = 0x0202,
    ...
};
EDIT: But, as suggested by Jonathan Grynspan, you really should be using Unicode (kCFStringEncodingUnicode) for everything.
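For what the Latin-2 round trip looks like in practice, here is a minimal Python sketch (the sample word is arbitrary):

text = u'żółć gęśl jaźń'                     # every letter exists in ISO 8859-2

latin2 = text.encode('iso8859_2')            # what a Latin-2 file contains
assert latin2.decode('iso8859_2') == text    # lossless round trip

# The same bytes misread as Latin-1 produce mojibake, which is
# likely the kind of garbling the user complained about:
print(latin2.decode('iso8859_1'))            # ¿ó³æ gê¶l ja¼ñ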