Writing Unicode text in Xcode - iPhone

When I try to write Arabic words in the Xcode editor, they do not display correctly: the text appears garbled and reversed (the output on the iPhone is fine), so it becomes harder for me to review the strings I enter in the editor. Is there any way to overcome this issue?

I think those are bugs in Xcode (you can try changing the font, but I don't think the text direction can be changed).
However, it is generally preferable to write your strings in English and then use internationalization (i18n) techniques to look up and display the Arabic at runtime. A quick Google search turned up this blog post. This solves two issues (a small sketch of the pattern follows the list):
You can support any number of languages.
You can store your Arabic text in a separate file and edit it with an external editor, making it easier to work with.
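The blog post covers the iOS specifics; on iOS the standard mechanism is a Localizable.strings file looked up with NSLocalizedString. As a rough, language-neutral illustration of the same externalize-and-look-up pattern, here is a minimal sketch in Java using ResourceBundle; the bundle name and key are made up for illustration:

import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {
    public static void main(String[] args) {
        // Looks for Messages_ar.properties (Arabic) on the classpath,
        // falling back to Messages.properties; both names are hypothetical.
        ResourceBundle bundle = ResourceBundle.getBundle("Messages", new Locale("ar"));
        System.out.println(bundle.getString("greeting"));
    }
}

Either way, the translations live in a plain text file that you can edit with any editor that handles the script well, rather than in source code.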

Related

Classic look of Windows tab control in a Unicode MFC program?

I am working on an MFC dialog-based program with a CTabCtrl (VS2017, W10). Everything works as expected, apart from the way the tabs look (convoluted story, don't ask).
I need them to look like the ones on the right, but when I created a new project with a CDialogEx-based class and added tabs to the dialog (just the standard VS/MFC stuff, nothing fancy yet), they looked like the ones on the left. What I found after some testing and comparing with older projects is that if I switch the project's default Character Set from Unicode to Multi-Byte Character Set, I get the look I want (yes, that sounds completely unrelated, but I checked and rechecked several times). But that is ridiculously inconvenient: the program needs to work with different languages and uses Unicode libraries for managing the data.
I have no idea if the problem is really MFC-related; it could be some deeper Windows thing.
Any idea what can be done to get the right look (pun intended), other than implementing my own OwnerDraw() or adding an additional layer of code to translate between Unicode and MBCS data? Both approaches sound pretty off.

Any method to restore a garbled/distorted text file with Matlab?

I have run into a very weird situation and really need your assistance. I appreciate your effort and time in advance.
I have a machine which produces a text file that records some information about the machine's working status, such as the coordinates of the drill head and the rotating speed used at each position. When we examine the text file, it appears to be unreadable because most of the contents are garbled. Please see the attached figure: http://ppt.cc/sA1I
If I open it with UltraEdit I see: http://ppt.cc/TrnV
As you can see, some parts of the file are readable; however, there are many unrecognizable characters, which should be the numeric values we want.
There are two reasons I believe this problem can be solved with Matlab. First, I am sure this machine has a lot of built-in Matlab code inside for analysis purposes. Second, we have a .exe file, compiled with Matlab, that can restore the garbled text file to an arranged, readable format (the values of the coordinates are restored).
We desperately want to see the contents of this file ourselves. Please kindly provide a solution, an idea, or any direction for me to solve this issue.
Sincerely,
Old question without an answer: for the record, a suggestion.
Sounds like a case of mojibake, a problem with text encoding. Here's how I solved it.
Background: I had text files created on a Mac, others on Windows, others still on Linux, each in a different text encoding. So I got a text editor that allowed me to view the encoding and to change it. In my case I used TextMate on macOS: I opened the files and picked the correct encoding upon opening, which was sometimes a Windows format, sometimes a Mac format, sometimes a Latin format; I had to use trial and error to figure it out, based on the preview this particular piece of software gave me. Once I had a file open in the correct encoding, I would save it in UTF-8 format, which is not platform-specific and lets me move my text files across various computers.
There may be more scalable methods, but I only had a hundred or so files to deal with, so I opted for the manual method, in order to personally verify the rendering on screen, and because my files came in different encodings to begin with.
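If you do have many such files, the same open-with-a-guessed-encoding-then-save-as-UTF-8 step can be scripted. A minimal sketch in Java, assuming a windows-1252 source encoding and hypothetical file names; in practice you would try candidate encodings until the text reads correctly:

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Recode {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get("machine-log.txt");       // hypothetical input file
        Path out = Paths.get("machine-log-utf8.txt"); // converted copy
        // Decode the raw bytes with the guessed source encoding...
        String text = new String(Files.readAllBytes(in), Charset.forName("windows-1252"));
        // ...then re-encode as UTF-8, which any modern editor can open.
        Files.write(out, text.getBytes(StandardCharsets.UTF_8));
    }
}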

Why does my system not display Unicode correctly?

I wrote this question, and it turns out the code is correct but it doesn't display properly on my system. I don't understand! Why might it do this? My system is set to United States English. I don't know what the problem can be.
This makes it difficult to develop Unicode apps when they don't display properly on my system :(
-Edit- To be more clear: I made a WinForms app using .NET, and the text appears incorrect on my machine but works on others. I can copy/paste text into my app, but I won't know if it ran correctly, since I see nonsense instead of text. However, most Unicode works; special characters (those beyond 16 bits) do not.
I assume from the question you linked to that you are on a Windows machine. The problem could be that Windows does not have a global encoding option at all. United States English is a language setting which, as far as I know, does not mean what you expect it to mean: it does not set all of your programs to show text in a Unicode format.
The quick answer is that, especially on Windows, each program that displays text to the user is responsible for the character encoding. You have to make sure that both the program and the environment where the problem appears are set to display text using some Unicode format, such as UTF-8.
Read up on Unicode and UTF-8.
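To make the point concrete, here is a small Java sketch, with illustrative values only, showing how the same bytes decode differently under two encodings, and why characters beyond 16 bits (outside the Basic Multilingual Plane) are a separate concern even when the encoding is right:

import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        // The same bytes decoded with two different charsets give different text.
        byte[] utf8 = "héllo".getBytes(StandardCharsets.UTF_8);
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1)); // "hÃ©llo" (mojibake)
        System.out.println(new String(utf8, StandardCharsets.UTF_8));      // "héllo"

        // A character beyond U+FFFF occupies two UTF-16 code units (a surrogate pair);
        // it still only renders if the display font actually has a glyph for it.
        String gothic = new String(Character.toChars(0x10330)); // U+10330 GOTHIC LETTER AHSA
        System.out.println(gothic.length());                           // 2 chars...
        System.out.println(gothic.codePointCount(0, gothic.length())); // ...but 1 code point
    }
}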

Eclipse - how to add a file that has right-to-left strings

I am writing a Java app using Eclipse. This app reads a set of Hebrew strings (which are right-to-left). Assuming I put these strings in a separate file, how do I tell the Eclipse editor that they are right-to-left text?
I tried eclipse -dir rtl, but that puts all of Eclipse in RTL mode, which is not the behavior I am looking for.
One alternative is to carefully use StringBuilder.reverse() to manipulate the strings as needed after they are read in (a sketch follows below).
http://download.oracle.com/javase/1.5.0/docs/api/java/lang/StringBuilder.html#reverse%28%29
You could also write some simple string manipulation methods as helpers to handle whatever special needs you have.
This page, found in a quick Google search, seems to have some interesting info on how Java handles and renders Hebrew: http://mindprod.com/jgloss/hebrew.html
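For what it's worth, a minimal sketch of the reverse() idea. Note that StringBuilder.reverse() keeps surrogate pairs intact, but it will scramble combining marks such as Hebrew vowel points, so it only suits plain, unpointed text:

public class RtlDemo {
    public static void main(String[] args) {
        String hebrew = "\u05E9\u05DC\u05D5\u05DD"; // "shalom" in logical order
        // reverse() flips the char order, crudely mimicking visual RTL order.
        String visual = new StringBuilder(hebrew).reverse().toString();
        System.out.println(visual);
    }
}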

Looking for a UTF-8 text editor

I am looking for a (simple) text editor that can handle text in different encodings in the same document.
I need to develop some sites with mixed Japanese and English text, and the editors I have now (on an English Windows system) are unable to display the Japanese text.
jEdit doesn't display the Japanese text I have inputted, but when I look at the file in a browser it shows up correctly.
gVim shows all Japanese text as question marks, both in the editor and in the browser.
In gVim, inputting the kanji works (you type the pronunciation and then press the space bar to get the kanji), but when you confirm the kanji you want, it is replaced with question marks (one question mark for every kanji).
Can someone recommend a text editor for editing HTML and PHP files that is able to display UTF-8 encoded text and also save as a UTF-8 file?
Thank you.
After reading about Emacs I installed it; see below.
Thanks everybody for the hints.
If you don't have a Unicode font yet, you have to find one online or buy one.
Here are the instructions for installing a font on a Windows system: http://support.microsoft.com/kb/314960
jEdit
I changed my font in jEdit to a Unicode font and now the Japanese shows up normally.
Inputting the Japanese is still problematic, as you don't see what you are typing.
(To change the font used to edit files, go to Utilities -> Global Options -> Text Area and select a Unicode font; you'll then be able to see the Japanese characters.)
gVim
I am still trying to figure out how to add a font in gVim. Once I know how to do that, I'll update this.
Emacs
Emacs does not show the kanji correctly; they are displayed as ???, but at least I can see what I type in Japanese and select the right word.
So at this point I have to say that in jEdit I can see Japanese text but I can't input Japanese text; in gVim I can input Japanese text, but inside the text area it is displayed as ???, and the same goes for Emacs.
Adding a font in Emacs and gVim is, sadly enough, not a trivial task.
At the moment I use Notepad with the Arial Unicode MS font, saving as a UTF-8 file, as my Japanese editor. Not ideal, but at least it works.
Notepad++ is highly recommended.
Emacs correctly handles UTF-8 for me. (And of course, it can edit HTML and PHP files).
I would still recommend Vim. The problem you were seeing with question marks is probably an issue with the font you were using. When displaying text containing characters not covered by the current font, applications typically show them as empty boxes or question marks. See here for UTF-8 support in Vim.
This section of the Vim manual is also helpful, especially for setting up UTF-8 in Windows.
There is an issue with most Unicode-aware text editors: when you select a font, they stick to it. If the font does not include a glyph for a character, then the default substitution character (I believe U+FFFD, REPLACEMENT CHARACTER) is used.
In contrast, web browsers typically try to find a glyph for the characters they have to display among all the fonts provided by the system.
So, what you need, if you don't have "Arial Unicode MS" or a similar font (one including Japanese glyphs), is an editor that tries to match glyphs from other fonts besides the selected one.
Until someone provides a link to such an editor, I'll suggest a (somewhat extreme :) alternative:
Install the latest stable Python 2.x version for MS Windows (currently 2.6).
Include "IDLE" in the installation.
Start → Programs → Python 2.6 → IDLE (Python GUI)
The IDLE editor is typically used to edit Python code (and test it interactively in the Python shell). However, it can be used as a plain, fully Unicode-aware text editor, and when saving text that includes non-ASCII characters it defaults to UTF-8 encoding.
Now, IDLE is based on Tkinter, which is an interface to Tk, the GUI library of Tcl; and Tcl/Tk, like web browsers, searches other fonts too when asked to display a character for which no glyph is present in the widget font.
However far-fetched this may seem, I really believe it would help; if no other solution helps you, give it a try.
Vim works fine for me as a UTF-8 text editor.
Firstly, you need a font that has the characters you are using. Choosing another text editor won't help you with this (unless it searches other fonts for the correct characters when the font you are using doesn't have them). If you are using gVim, you can set the font like this:
set guifont=Consolas
(This is not to say that Consolas is the font you want.) You probably want to put this in your .vimrc file so that it is always used.
Secondly, Vim needs to interpret the file as UTF-8, which it doesn't always do automatically. To make it do so, use:
set encoding=utf8
You can also check which encoding it is using with:
set encoding?
EmEditor is written by a Japanese company for exactly this purpose. It is a fine text editor with good performance and simplicity, yet pretty much all the features expected of a capable editor; I use it as my default when on the Windows platform, as well as for editing Japanese web page templates. It deserves to be better known, IMO; it is at least as good as, say, TextPad, but with full Unicode support.
Unfortunately it is not free; however, you can find a free version of the old EmEditor 6 at sites such as download.com.
You can use plain Notepad.exe with the "Arial Unicode MS" font (if all of your text is left-to-right, given an English Windows version). Just do Save As and select UTF-8.
In general, use your favourite editor with a font like "Arial Unicode MS". I mention this one because it is the font with the greatest Unicode coverage I have seen.
Try BabelPad. Editing-wise, it's simple. Unicode-support-wise, it's awesome!
It sounds like the problem with jEdit may be the font - are you using a font that can display all the characters correctly?
To be more precise, Arial Unicode MS is a reasonable choice for a Unicode font that can display a wide range of characters across many languages. There are certain issues with it that can make it less than optimal for some languages used in isolation - this is why Windows also ships language-specific Unicode fonts.
I've never had a problem with Vim as long as I use a font that actually contains the characters I want. It needs to be a monospace font. Use :set enc=utf8 to get into UTF-8 mode. Then you can use the :digraphs command to get a display of available characters and see how each is rendered.
To add a font, install it in Windows (Control Panel/Fonts/Add Font). If it's a monospace font, it will then show up in Vim under Edit/Font.
Just to add another one: I just checked, and Programmer's Notepad 2 has a UTF-8 setting too.
(vim and emacs do just fine as well)
EditPlus seems to be a better option for UTF-8; I have used it myself.
EditPad Lite and Pro fully support Unicode as of version 6. (Disclaimer: Those are my own products.)
If you get question marks, you're using an encoding that does not support Japanese characters. In EditPad, you can change the text encoding (Unicode, legacy code pages) via Convert, Text Encoding. You can set the defaults per file type in Options, Configure File Types, Encoding.
If you see squares instead of Japanese characters, select a Japanese font or a Unicode font. You can do this in EditPad via Options, Font.
To type Japanese, simply install a Japanese keyboard driver in the keyboard settings in the Windows Control Panel, if you haven't already.
EditPad Pro has preconfigured file types for PHP and HTML.
Kate, and by extension any other KDE program that uses Kate as an embedded KPart (KWrite, Quanta+, KDevelop). It handles lots of encodings, but I like to always use UTF-8. It also has a huge collection of syntax highlighters.
Try SciTE http://gisdeveloper.tripod.com/scite.html. It's just great ;)
For very basic UTF-8 multilingual text editing, I have had good luck with BabelPad (www.babelstone.co.uk): it's free, simple, and robust, and displays almost everything with no fuss. When the editing needs are more severe, I resort a lot to EditPad Pro, or occasionally Notepad++. For non-Unicode editing on Windows, I'm a TextPad user -- my staff and I have probably spent about 200,000 hours in TextPad, with only occasional forays into Notepad2, MadEdit, jEdit, XML Copy Editor, and EPCedit. The latter two handle UTF-8 XML files well. All of the editors mentioned above are free except TextPad and EditPad Pro. Thanks to the person who suggested EmEditor; I'll try it out. --PFSchaffner
I like jEdit for its ability to indent wrapped lines. Really nice when editing XML files. A word of warning, though: it's Java, so it's not lightning fast like you would expect a text editor to be.
Text encodings are fully supported. It distinguishes between text files with and without the header identifying the file format (the byte order mark), calling them UTF-8Y and UTF-8 respectively. This is something I miss in other text editors.
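For reference, the header jEdit detects is the UTF-8 byte order mark: the three bytes EF BB BF at the start of the file. A quick way to check for it yourself, sketched in Java with a placeholder file name:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BomCheck {
    public static void main(String[] args) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get("somefile.txt")); // placeholder path
        boolean hasBom = bytes.length >= 3
                && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB
                && (bytes[2] & 0xFF) == 0xBF;
        System.out.println(hasBom ? "UTF-8 with BOM (jEdit's UTF-8Y)" : "no UTF-8 BOM");
    }
}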
Try EditPlus. It has specific support for HTML and syntax highlighting, and can also work as a simple IDE for any compiler.
On the Mac: SubEthaEdit has excellent support for character encodings.
TextPad is a good utility too. It's trialware, but does the job fine. See how to set char-encoding-setting-in-textpad.
For Japanese, Sakura Editor is exceptional. It can display UTF-8, EUC-JP, SJIS, and so on.
http://www.ultraedit.com/ is a multiplatform editor that does UTF-8 and all kinds of conversions between formats.
EditPad Pro ... is recommended for you.
Cheers ;)