Improper UTF-16 count for certain emojis - Swift

Certain emojis that are actually a combination of emojis result in an incorrect count (or at least a count that doesn't match websites like Twitter).
An example problem is pretty straightforward:
👩‍⚖ <- this emoji is a female judge (woman + scales)
The character count of this emoji is 1.
The utf16 count of this emoji in Swift is 4:
let tweet = "👩‍⚖"
print(tweet.utf16.count)
However, pasting this emoji into Twitter (which doesn't seem to support it) gives you the two separate emojis, woman and scales. Woman is 2 characters and scales is 2 characters when using the utf16 count. However, Twitter also seems to include a hidden, albeit counted, invisible character. You will notice this when you try to delete the characters. I'm wondering if there's some way to properly match Twitter's count on mobile. I've seen other websites which, while properly showing the single emoji, still get the proper count.
Thanks.
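For anyone comparing the numbers in Swift, the three different views of the same string can be inspected like this (a minimal sketch; the scalars are written out explicitly so the invisible zero-width joiner is visible):

```swift
// WOMAN (U+1F469) + ZERO WIDTH JOINER (U+200D) + SCALES (U+2696)
let tweet = "\u{1F469}\u{200D}\u{2696}"

print(tweet.count)                // 1 – one grapheme cluster (one user-perceived character)
print(tweet.utf16.count)          // 4 – UTF-16 code units: 2 (surrogate pair) + 1 + 1
print(tweet.unicodeScalars.count) // 3 – the three scalars above
```

The woman emoji lies outside the Basic Multilingual Plane, so it alone accounts for two UTF-16 code units; the joiner and the scales add one each.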

I upgraded the Twitter pod and used its weighted length for the solution here:
https://developer.twitter.com/en/docs/developer-utilities/twitter-text
For those who end up here with a similar problem: Twitter recommends you show progress rather than exact counts by using permillage.

Related

How to combine two characters into one character?

I am building a text editor app and want to implement numbers & bullets like Apple's Notes app. For numbers I am using "1." (number + full stop). The problem is that when I try to remove "1." using backspace, I have to press backspace twice, because "1." is treated as two characters. This problem gets worse when my list reaches "10." or "100.".
I have been looking into Unicode characters. There are code points for "1." (\u{2488}) and "2." (\u{2489}), but these only go up to "20." (\u{249B}). I could not find more than that anywhere.
Can you give me a solution or a different approach to solve my problem?
Thank You !
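For what it's worth, the precomposed DIGIT ... FULL STOP characters mentioned above really do stop at "20.", so they only help for short lists. A quick sketch of how they behave as single characters:

```swift
// U+2488 (⒈ DIGIT ONE FULL STOP) … U+249B (⒛ NUMBER TWENTY FULL STOP)
let one = "\u{2488}"
let twenty = "\u{249B}"

print(one, twenty)   // ⒈ ⒛
print(one.count)     // 1 – a single Character, so one backspace removes it
```

Beyond twenty, an alternative (not from the question) is to intercept the backspace in the text view delegate and delete the whole "N." prefix as a unit, rather than relying on precomposed characters.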

How to obtain a full list of Unicode emojis from the Unicode website

I'm building an application that requires the use of emojis, specifically generating large sequences of random emojis. This requires having a large list to pull from. Rather than taking the approach detailed here of looping over hardcoded hex ranges, I decided to take a different approach: download and parse data from the Unicode website. From there, I do some code generation and write all the unique emojis to disk, which I can then pick up inside my application. All this happens either as a manual step or as a build step for my app.
However, the Unicode specification is complicated and I'm unsure which data I should be pulling from to build up a definitive list. There are three files under the latest version of Unicode (14.0):
emoji-sequences.txt
emoji-test.txt
emoji-zwj-sequences.txt
There are also two files in the Unicode Character Database (UCD):
emoji-data.txt
emoji-variation-sequences.txt
There are definitely duplicates amongst all these lists (such as 😀), and while I could download and parse all five files and reduce the list down to unique instances in my script, I'd like to keep my script as simple as I can without doing unnecessary work.
From what I understand:
emoji-test.txt is a grouping of emoji characters as you might see in a keyboard, grouped by category
emoji-sequences.txt is a list of emoji ranges, single emojis, and multi character emojis such as 🇦🇨 (1F1E6 1F1E8) or emojis combined with a variation selector like FE0F
emoji-zwj-sequences.txt is a list of emojis joined by the zero width joiner character
emoji-variation-sequences.txt is a list of emojis that can be presented either in textual form or as emojis
emoji-data.txt seems to be a very comprehensive list of not just emojis but also emoji modifiers and the like
All this has left me rather perplexed as to which list or combination of lists would give me the most comprehensive set of emojis. emoji-data.txt seems to have the most wide-ranging list, but I don't want things like emoji modifiers or emoji components; I'm only looking for emojis that a user can select with the keyboard (for example, you can't select a skin tone modifier by itself).
Which lists or combination of lists would yield the most comprehensive, wide-ranging list of emojis that I could use in my app?
Use the union of emoji-sequences.txt and emoji-zwj-sequences.txt. That set comprises the emoji recommended for general interchange (RGI). See https://www.unicode.org/reports/tr51/tr51-19.html#def_rgi_set.
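A sketch of how parsing those two files might look, assuming the usual UCD line format of `;`-separated fields with `#` comments, where the first field is either a code-point range like `1F600..1F64F` or a space-separated scalar sequence (`emojis(fromLine:)` is a hypothetical helper name, not part of any library):

```swift
import Foundation

// Parse one data line of emoji-sequences.txt / emoji-zwj-sequences.txt into emoji strings.
func emojis(fromLine line: String) -> [String] {
    guard !line.hasPrefix("#") else { return [] }                  // skip comment lines
    let body = line.split(separator: "#").first.map(String.init) ?? line
    let fields = body.split(separator: ";").map { $0.trimmingCharacters(in: .whitespaces) }
    guard let first = fields.first, !first.isEmpty else { return [] }

    if first.contains("..") {                                      // a code-point range
        let bounds = first.components(separatedBy: "..").compactMap { UInt32($0, radix: 16) }
        guard bounds.count == 2, bounds[0] <= bounds[1] else { return [] }
        return (bounds[0]...bounds[1]).compactMap { v in
            Unicode.Scalar(v).map { String(Character($0)) }
        }
    } else {                                                       // one emoji, possibly multi-scalar
        var result = ""
        for hex in first.split(separator: " ") {
            guard let v = UInt32(hex, radix: 16), let u = Unicode.Scalar(v) else { return [] }
            result.unicodeScalars.append(u)
        }
        return [result]
    }
}
```

For example, `emojis(fromLine: "1F468 200D 2696 FE0F ; RGI_Emoji_ZWJ_Sequence ; man judge")` would yield the single ZWJ sequence 👨‍⚖️. Running every line of both files through this and collecting the results into a Set would give the RGI union described above.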

Unicode characters aren't combined properly

I am working with some Devanagari text data I want to display in the browser. Unfortunately, there's one combination of nonspacing combining characters that doesn't get rendered as a properly combined character.
The problem occurs every time a base character is combined with the Devanagari Stress Sign Udatta ॑ (U+0951) and the Devanagari Sign Visarga ः (U+0903).
An example of this would be र॑ः, which is र (U+0930) + ॑ (U+0951) + ः (U+0903) and should be rendered as one character. But the stress sign and the Visarga don't seem to like each other (as you can see above!).
It's no problem to combine the base char with each of the other two signs alone, btw: र॑ / रः
I already tried to use several fonts which should be able to render Devanagari characters (some Noto fonts, Siddhanta, GentiumPlus) and tested it with different browsers, but the problem seems to be something else.
Does anyone have an idea? Is this not a valid combination of symbols?
EDIT: I just tried switching the two marks around to see what happens - it renders as रः॑, so U+0951 and U+0903 don't seem to have the same function, as the stress sign gets rendered on top of the other mark.
It looks like I don't understand Unicode well enough yet.
This is NOT a solution for your problem, but might be useful information:
I am working with some Devanagari text data I want to display in the browser.
Like you, I couldn't get this to work in any browser despite trying several fonts, including Arial Unicode MS:
The browser was simply rendering the text Devanagari Test: रः॑ from within the <body> of a JSP. The stress sign is clearly appearing above the Sign Visarga instead of the base character.
Is this not a valid combination of symbols?
It is a valid combination. I don't know Devanagari, so I don't know whether it is semantically "valid", but it is trivial to generate exactly the character you want from a Java application:
System.out.println("Devanagari test: \u0930\u0903\u0951");
This is the output from executing the println() call, showing the stress sign above the base character:
The screenshot above is from NetBeans 8.2 on Windows 10, but the rendering also worked fine using the latest releases of Eclipse and IntelliJ IDEA. The constraints are:
The three characters must be specified in that order in println() for the rendering to work.
The Sign Visarga and the Stress Sign Udatta must be presented in their Unicode form. Pasting their glyph representations into the source code won't work, although this can be done for the base character.
An appropriate font must be used for the display. I used Arial Unicode MS for the screen shot above, but other fonts such as Serif, SansSerif and Monospaced also worked.
Does anyone have an idea?
Unfortunately not, although it is clear that:
The grapheme you want to render exists, and is valid.
Although it won't render in a browser, it can be written to the console by a Java application.
The problem seems to be that all browsers apply the diacritic (Stress Sign Udatta) to the immediately preceding character rather than the base character.
See Why are some combining diacritics shifted to the right in some programs? for more information on this.
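One way to see why Unicode normalization can't rescue the ordering here: Swift exposes each scalar's canonical combining class, and the Visarga, being a spacing mark, has class 0, which blocks canonical reordering past it. A small diagnostic sketch (the printed classes come from the Unicode character database):

```swift
// Print the canonical combining class (ccc) of each scalar in RA + UDATTA + VISARGA.
for scalar in "\u{0930}\u{0951}\u{0903}".unicodeScalars {
    let hex = String(scalar.value, radix: 16, uppercase: true)
    print("U+\(hex): ccc \(scalar.properties.canonicalCombiningClass.rawValue)")
}
// U+930: ccc 0, U+951: ccc 230, U+903: ccc 0 –
// because the Visarga has class 0, normalization never reorders it
// relative to the Udatta, so the two orderings stay distinct strings.
```

That is consistent with the observation above that swapping the two marks produces a visibly different rendering: the two orders are not canonically equivalent.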

Typeahead Bloodhound - Filter

My index contains the word "dog". How can I also find this entry if I type "dogs"? I would like to find all parts of the word - "dogs", "dog", "do" - down to a minimum length of 2 or 3 characters.
I'm not an expert on Bloodhound, but what you're talking about here is called stemming, and it seems like the kind of thing that you could do using the datumTokenizer and the queryTokenizer.
There are stemmers for most languages of varying quality, but I think the one most people are using for English these days is the Snowball Stemmer. There are a number of implementations in JavaScript floating around.
In general, for things to work properly you'll want to stem both the user's query and the results.

Count the number of words in NSString

I'm trying to implement a word count function for my app that uses UITextView.
There's a space between two words in English, so it's really easy to count the number of words in an English sentence.
The problem occurs with Chinese and Japanese word counting because usually there are no spaces in the entire sentence.
I checked three different text editors on iPad that have a word count feature and compared them with MS Word.
For example, here's a series of Japanese characters meaning "the world's idea": 世界 (the world) + の ('s) + アイデア (idea)
世界のアイデア
1) Pages for iPad and MS Word count each character as one word, so the text counts as 7 words.
2) iPad text editor P*** counts the entire text as one word --> they just used spaces to separate words.
3) iPad text editor i*** counts it as three words --> I believe they used CFStringTokenizer with kCFStringTokenizerUnitWord, because I could get the same result.
I've researched on the Internet, and Pages' and MS Word's word counting seems to be correct because each Chinese character has a meaning.
I couldn't find any class that counts words like Pages or MS Word, and it would be very hard to implement from scratch because, besides Japanese and Chinese, iPad supports a lot of different languages.
I think CFStringTokenizer with kCFStringTokenizerUnitWord is the best option, though.
Is there a way to count words in NSString like Pages and MS Word do?
Thank you
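For reference, the CFStringTokenizer approach mentioned above can be sketched roughly like this in Swift (a rough sketch; I haven't verified it across every locale):

```swift
import Foundation

// Count word tokens using CFStringTokenizer with kCFStringTokenizerUnitWord.
let text = "世界のアイデア" as CFString
let tokenizer = CFStringTokenizerCreate(
    kCFAllocatorDefault,
    text,
    CFRange(location: 0, length: CFStringGetLength(text)),
    kCFStringTokenizerUnitWord,
    CFLocaleCopyCurrent()
)

var wordCount = 0
while CFStringTokenizerAdvanceToNextToken(tokenizer) != [] {
    wordCount += 1  // one token per advance until the tokenizer is exhausted
}
print(wordCount)    // 3 with this input, matching editor i*** above: 世界 / の / アイデア
```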
I recommend you keep using CFStringTokenizer. Because it's a platform feature, it will be upgraded along with the platform, and many people at Apple are working hard to reflect real cultural differences that are hard for regular developers to know.
This is hard because it is not essentially a programming problem; it is a human cultural and linguistic problem. You need a language specialist for each culture. For Japanese, you need a Japanese culture specialist. However, I don't think Japanese people need a word count feature, because as I've heard, the concept of a word itself is not so important in Japanese culture. You should define the concept of a word first.
And I can't understand why you want to force the concept of a word count onto the character count. Take the Kanji word you gave as an example: counting it by meaning is like counting "universe" as 2 words by splitting it into "uni" + "verse". That's not even logical. Splitting a word by its meaning is sometimes completely wrong and useless by the definition of a word, because the definition of a word itself differs between cultures. In my language, Korean, a word is just a formal unit, not a unit of meaning. The idea that each word maps to one meaning holds only in Roman-character cultures.
Just offer another feature, like character counting, for users in East Asia if you think they need it. Counting characters in a Unicode string is easy with the -[NSString length] method.
I'm a Korean speaker (so maybe outside your case :) and in many cases we count characters instead of words. In fact, I have never seen people counting words in my whole life. I laughed at the word counting feature in MS Word because I guessed nobody would use it. (However, now I know it's important in Roman-character cultures.) I have used the word counting feature only once, just to check that it really works :) I believe this is similar for Chinese and Japanese. Maybe Japanese users use word counting because their basic alphabet is similar to Roman characters, with no concept of composition; however, they also use Kanji heavily, which is a completely compositional, character-centric system.
If you build a word counting feature that works well on those languages (which are used by people who don't feel any need to split sentences into smaller formal units!), it's hard to imagine anyone using it. And without a linguistic specialist, the feature is unlikely to be correct.
This is a really hard problem if your string doesn't contain tokens identifying word breaks (like spaces). One way I know derived from attempting to solve anagrams is this:
At the start of the string you start with one character. Is it a word? It could be a word like "A" but it could also be a part of a word like "AN" or "ANALOG". So the decision about what is a word has to be made considering all of the string. You would consider the next characters to see if you can make another word starting with the first character following the first word you think you might have found. If you decide the word is "A" and you are left with "NALOG" then you will soon find that there are no more words to be found. When you start finding words in the dictionary (see below) then you know you are making the right choices about where to break the words. When you stop finding words you know you have made a wrong choice and you need to backtrack.
A big part of this is having dictionaries sufficient to contain any word you might encounter. The English resource would be TWL06 or SOWPODS or other scrabble dictionaries, containing many obscure words. You need a lot of memory to do this because if you check the words against a simple array containing all of the possible words your program will run incredibly slow. If you parse your dictionary, persist it as a plist and recreate the dictionary your checking will be quick enough but it will require a lot more space on disk and more space in memory. One of these big scrabble dictionaries can expand to about 10MB with the actual words as keys and a simple NSNumber as a placeholder for value - you don't care what the value is, just that the key exists in the dictionary, which tells you that the word is recognised as valid.
If you maintain an array as you count you get to do [array count] in a triumphal manner as you add the last word containing the last characters to it, but you also have an easy way of backtracking. If at some point you stop finding valid words you can pop the lastObject off the array and replace it at the start of the string, then start looking for alternative words. If that fails to get you back on the right track pop another word.
I would proceed by experimentation, looking for a potential three words ahead as you parse the string - when you have identified three potential words, take the first away, store it in the array and look for another word. If you find it is too slow to do it this way and you are getting OK results considering only two words ahead, drop it to two. If you find you are running up too many dead ends with your word division strategy then increase the number of words ahead you consider.
Another way would be to employ natural language rules - for example "A" and "NALOG" might look OK because a consonant follows "A", but "A" and "ARDVARK" would be ruled out because it would be correct for a word beginning in a vowel to follow "AN", not "A". This can get as complicated as you like to make it - I don't know if this gets simpler in Japanese or not but there are certainly common verb endings like "ma su".
(edit: started a bounty, I'd like to know the very best way to do this if my way isn't it.)
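A toy sketch of the dictionary-plus-backtracking idea described above (the tiny word set is obviously a stand-in for a real word list such as TWL06 or SOWPODS):

```swift
// Greedy longest-match segmentation with backtracking via recursion.
// Returns nil when no complete segmentation into dictionary words exists.
func segment(_ text: String, words: Set<String>) -> [String]? {
    if text.isEmpty { return [] }
    for end in stride(from: text.count, through: 1, by: -1) {   // try longer prefixes first
        let prefix = String(text.prefix(end))
        if words.contains(prefix),
           let rest = segment(String(text.dropFirst(end)), words: words) {
            return [prefix] + rest                              // this branch segments fully
        }
        // otherwise fall through: effectively "pop the word and try a shorter one"
    }
    return nil                                                  // dead end – caller backtracks
}

let toyDictionary: Set = ["A", "AN", "ANALOG", "LOG", "AARDVARK"]
print(segment("ANALOG", words: toyDictionary) ?? [])   // ["ANALOG"]
print(segment("ANLOG", words: toyDictionary) ?? [])    // ["AN", "LOG"]
```

The recursion plays the role of the array-popping described above: when a prefix choice leads to a dead end, the failed branch returns nil and the loop retries with a shorter prefix. A production version would memoize positions already proven unsegmentable to avoid exponential blowup.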
If you are using iOS 4 or later, you can do something like:
__block NSUInteger count = 0;
[string enumerateSubstringsInRange:NSMakeRange(0, string.length)
                           options:NSStringEnumerationByWords
                        usingBlock:^(NSString *word,
                                     NSRange wordRange,
                                     NSRange enclosingRange,
                                     BOOL *stop) {
    count++;
}];
More information in the NSString class reference.
There is also WWDC 2010 session, number 110, about advanced text handling, that explains this, around minute 10 or so.
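For anyone arriving here later, the same enumeration exists in Swift (a sketch; word boundaries for Japanese come from the system tokenizer, so the exact count for unspaced text can vary by OS version):

```swift
import Foundation

// Count words using String.enumerateSubstrings with the .byWords option,
// the Swift counterpart of -[NSString enumerateSubstringsInRange:options:usingBlock:].
func wordCount(of text: String) -> Int {
    var count = 0
    text.enumerateSubstrings(in: text.startIndex..<text.endIndex,
                             options: .byWords) { _, _, _, _ in
        count += 1
    }
    return count
}

print(wordCount(of: "Hello, world!"))  // 2 – punctuation is not counted
print(wordCount(of: "世界のアイデア"))   // segmented by the system tokenizer
```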
I think CFStringTokenizer with kCFStringTokenizerUnitWord is the best option though.
That's right - you have to iterate through the text and simply count the number of word tokens encountered along the way.
Not a native Chinese/Japanese speaker, but here are my 2 cents.
Each Chinese character does have a meaning, but the concept of a word is a combination of letters/characters to represent an idea, isn't it?
In that sense, there are probably 3 words in "sekai no aidia" (or 2 if you don't count particles like no/ga/de/wa, etc.). Same as English - "world's idea" is two words, while "idea of world" is 3, and let's forget about the required "the", hehe.
That given, counting words is not as useful in non-Roman languages, in my opinion, similar to what Eonil mentioned. It's probably better to count the number of characters for those languages. Check with Chinese/Japanese native speakers and see what they think.
If I were to do it, I would tokenize the string with spaces and particles (at least for Japanese and Korean) and count tokens. Not sure about Chinese.
With Japanese you can create a grammar parser, and I think it is the same with Chinese. However, that is easier said than done, because natural language tends to have many exceptions; but it is not impossible.
Please note it won't really be efficient, since you have to parse each sentence before being able to count the words.
I would recommend using a parser generator rather than building the parser yourself, so that at least you can concentrate on writing the grammar instead of the parser. It's not efficient, but it should get the job done.
Also, have a fallback algorithm in case your grammar doesn't parse the input correctly (perhaps the input really didn't make sense to begin with); you can fall back to the length of the string to make things easier on yourself.
If you build it, there could be a market opportunity for you to use it as a natural language Domain Specific Language for Japanese/Chinese business rules as well.
Just use the length method:
[@"世界のアイデア" length]; // is 7
That being said, as a Japanese speaker, I think 3 is the right answer.