I'm trying to create a LaTeX document with as many Unicode characters as possible. My header is as follows:
\documentclass[12pt,a4paper]{article}
\usepackage[utf8x]{inputenc}
\usepackage{ucs}
\pagestyle{empty}
\usepackage{textcomp}
\usepackage[T1]{fontenc}
\begin{document}
The Unicode characters which follow in the document-body are in the form of
\unichar{xyz}
where xyz stands for an integer, e.g. 97 (for "a").
The problem is that for many integers the file fails to compile, with an error message like:
! Package ucs Error: Unknown Unicode character 79297 = U+135C1,
(ucs) possibly declared in uni-309.def.
(ucs) Type H to see if it is available with options.
Which packages should I add to the header to make the file compile with as many characters as possible?
As far as I remember, the utf8x package is merely a hack to allow for some Unicode "support": basically a giant lookup table translating individual character sequences into what LaTeX expects.
You should really use Xe(La)TeX for such things, which was designed with Unicode in mind. The old TeX still suffers from its 1970s heritage in this respect.
ETA: From the package's documentation:
This bundle provides the ucs package, and utf8x.def, together with a large number of support files.
The utf8x.def definition file for use with inputenc covers a wider range of Unicode characters than does utf8.def in the LaTeX distribution. The ucs package provides facilities for efficient use of large sets of Unicode characters.
Since you're already using both packages I guess you're out of luck then with plain LaTeX.
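If you do switch, a minimal XeLaTeX sketch looks like this (compile with xelatex; the font name here is just an assumption, any Unicode font installed on your system will do):
\documentclass[12pt,a4paper]{article}
\usepackage{fontspec}
\setmainfont{DejaVu Serif}
\begin{document}
Direct input works: α β ω, and a code point can be given as \symbol{"03B2}, provided the chosen font has a glyph for it.
\end{document}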
You can manually edit the .def files, similarly to this: https://bugzilla.redhat.com/show_bug.cgi?id=418981
As far as I know, this is the only way to enable LaTeX for the Latvian language, for example.
Just find your symbol (on a Mac, use "locate uni-1.def" to find where the package is located; to enable the locate command see http://osxdaily.com/2011/11/02/enable-and-use-the-locate-command-in-the-mac-os-x-terminal/).
I thought Julia supports raw unicode input, such as:
julia> test = "π£¢∞§"
"π£¢∞§"
julia> 😘 = 1 ;
julia> print(😘 )
1
However, it seems Julia does not support  (the Apple logo).
julia>  = 123
ERROR: syntax: invalid character ""
julia> test = ""
"\uf8ff"
I wonder what's the underlying reason for that, and whether there is a way I can use the  character in Julia?
I believe this link more properly explains the case of the Unicode character that you see as Apple's logo.
The problem is that the Unicode value used is one of several that are set aside for private use. That means that each operating system, or application, or implementation is free to use those Unicode characters for anything they want. It just so happens that Apple has chosen to use Unicode character U+F8FF (decimal value 63743, or on the web as either &#63743; or &#xF8FF;) as the Apple logo. But some Windows fonts put in a Windows logo. And some other fonts put in a Klingon mummification glyph. Or Elven script. Or anything they want. And if it isn't defined in your local font, you'll just see a square.
My opinion is that Julia simply doesn't use this special value for anything. This also explains why your "π£¢∞§" characters work nicely: they are proper Unicode characters, much more widely supported across platforms.
As a side note, I too see a simple square instead of the Apple logo here.
Edit
Here is a list of unicode characters supported by Julia.
To expand on Alex's answer...
Apple's logo () isn't an official Unicode symbol. I think there are very few commercial logos and symbols in the main Unicode tables.
However, Unicode provides some 'anything goes' areas (called PUAs - private use areas) that companies and individuals can fill with their own symbols, so that their users can access certain special glyphs. The main PUA is U+E000 to U+F8FF. Depending on which font you're using, you'll find all kinds of stuff assigned to these codes. On a Mac, I can usually get the Apple logo at "\uf8ff", with the right font selected, but not the Ubuntu symbol or the Windows logo, unless I choose another font. (There's also a fallback mechanism, whereby if you request a code point that the current font doesn't have, the OS will find a suitable substitute in another font and use that.)
In Julia, you can only use certain Unicode characters for variable names. Julia wouldn't allow anything from the private use area anyway, unless some fonts were distributed to every computer and everyone agreed on who had which Unicode point. (Mathematica makes extensive use of PUA symbols in their notebooks, because they can and do install their own fonts, and can then access various glyphs from the PUA in the notebook with guaranteed results.)
You are allowed to use emoji characters as variable names, so you could try the Emoji apple, rather than the Apple apple:
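For example (a sketch; 🍎 is U+1F34E, an ordinary public code point that Julia's parser accepts as an identifier, unlike the private-use U+F8FF):
julia> 🍎 = "apple"
"apple"
julia> println(🍎)
apple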
One of my references in BibDesk contains a Latin/Greek character, e.g. 'β'. I get the following error when using the reference in Texmaker:
"! Package inputenc Error: Unicode char \u8:β not set up for use with LaTeX."
How can I set it up to work?
Though with inputenc TeX can read all the Unicode characters, it doesn't know what to do with most of them, except those in the usual ASCII range. I once had the same problem, when I wanted to copy some Unicode text verbatim into one of my TeX documents and that text contained symbols like alpha or other math symbols.
The solution is the command \DeclareUnicodeCharacter{#1}{#2}, where #1 is the Unicode value of the character and #2 is a TeX expression that gets inserted whenever character code #1 is encountered. E.g. for beta you could use \DeclareUnicodeCharacter{03B2}{\ensuremath{\beta}}, because 03B2 is the Unicode code point of the symbol "beta" (you have to look these things up in a Unicode table).
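A minimal sketch of a complete document using it (assuming pdfLaTeX with the standard utf8 option of inputenc):
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\DeclareUnicodeCharacter{03B2}{\ensuremath{\beta}}
\begin{document}
A β in the running text now compiles.
\end{document}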
I've also written a TeX package for that, if you're interested. It can be found on GitHub at https://github.com/ezander/utf8math. See especially this file: https://github.com/ezander/utf8math/blob/master/utf8math.sty
Another solution is to use XeTeX, which handles Unicode better than most other TeX engines: replace the lines
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
with
\usepackage{fontspec}
There are a lot of programmers' editors that claim to support Unicode / UTF-8. I've tried a number of them (UltraEdit, jEdit, EmEditor) but none of them tell you how to actually enter Unicode characters into a file. Some of them tell you how to change the default file encoding to UTF-8 or how to select a font that has good support for UTF-8, but not how to enter UTF-8 into a file using their editor.
The Go language (and some others) supports UTF-8, and I like the idea of using actual Unicode symbols for variables instead of variables with names like omega. I haven't found a programmers' editor yet that actually allows this, though.
The only editor / word processor that I've found that lets you enter Unicode is Microsoft Word. Type the Unicode value and press Alt+X, and Word converts it. To get the Greek letter omega, type "03c9" followed by Alt+X. UltraEdit will let you copy UTF-8 from a web page into it, but their docs don't say how to actually enter UTF-8 in a file, and their tech support people don't know either.
This should be simple, but seems to be completely undocumented. Is there some key-combination convention that lets you enter Unicode into these editors that supposedly support Unicode, the way Ctrl-F is widely used for search?
Thanks.
The standard programmer's editor vim(1) supports limited Unicode input even if your operating system is too broken to do so (are there any such, still?).
Just enter ^VuXXXX in insert mode, where XXXX represents exactly four hex digits.
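For example, typing ^Vu03c9 in insert mode inserts ω (GREEK SMALL LETTER OMEGA, U+03C9).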
That will allow you to enter the ~6% of Unicode allocated to the Basic Multilingual Plane. The rest are forbidden to you.
This may be fixed in a newer release.
Otherwise, just use your mouse.
A few techniques I use if an editor is lacking:
Use the Windows charmap.exe utility to select characters and paste into a document.
Install an input method editor (IME) to write in a particular language.
Windows ALT keycodes.
Better to set up your keyboard to generate Unicode characters across all Windows applications than to rely on a single application's custom input feature, IMO.
Use the EnableHexNumpad feature and you can type any character in the Basic Multilingual Plane using Alt+numpad-plus,hexcode. (May not be of much use on a laptop without a numpad, though.)
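A sketch of enabling it (this is the standard registry location; you have to log off and back on for it to take effect):
reg add "HKCU\Control Panel\Input Method" /v EnableHexNumpad /t REG_SZ /d 1
After that, hold Alt, press the numpad +, type the hex code (e.g. 03c9 for ω), and release Alt.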
Or if there are particular characters you want to type a lot, find a keyboard layout that allows you to type them directly. For example eurokb might cover it, or you can make your own with MSKLC.
Old question, but you can type a lot of Unicode in GNU Emacs or Vim.
GNU Emacs: M-x set-input-method RET tex (or C-x RET C-\ tex) will let you type \omega to generate ω.
Vim: Vim digraphs can generate Unicode; C-k w * in insert mode gives you ω.
deceze hit the nail on the head. (S)he just didn't elaborate. bobince gave a bit more.
And I'm hazarding a guess that you're a developer or tester working on L10N or I18N. I'm also guessing you need to do more than just a few characters here or there, or you'd be satisfied with pasting from another app. So, I'll share some advice. (Note: here, "you" refers to the next person to look here. I'm sure the original poster doesn't care anymore by now. :-))
If you're on Windows 10, install an appropriate keyboard driver that lets you input the characters you want into any application. I'm sure Linux has support for the same sort of thing.
E.g. I'm teaching myself Hindi (हिंदी), so I installed Windows' Hindi (Devanagari) support. I typed "Hindi", in Hindi, using that support, then I switched back to US English to do the rest of this post. If all you need are accented characters from Western European languages, you can install the INTL English support and type directly in español or français or whatever.
Don't look at entering Unicode characters as entering some sort of special data amidst your English text. It's just someone else's language. Use their keyboard. Type their language.
I'm writing a flashcard app to help my learning. I'm using the Hindi keyboard support to type characters into Word, WordPad, Excel, and the Visual Studio editor. And that Hindi keyboard support works exactly the same way in all of those apps, as I'd expect it to work in just about any text editor that supports Unicode. And as you saw above, it also works in a simple text-edit control in Chrome. No copy and paste. No remembering special codes. It's as ubiquitous as Ctrl-F.
It looks like the Unicode support in programmers' editors (except for some Microsoft products) is mostly read-only. They can open a file with Unicode and display the characters, but typing Unicode into a file is a different story. If you want to enter Unicode in a programmers' editor you can copy it from somewhere else (a web page, or Microsoft Word or Notepad) and paste it into the editor, but the editors make typing Unicode difficult or impossible.
UltraEdit tech support referred me to this web page which explains a lot. Unfortunately none of the solutions worked with UltraEdit.
Microsoft Word and Notepad support Unicode entry. Type the Unicode value followed by Alt+X and it converts the hexadecimal and displays the character. You can then copy and paste it into UltraEdit or one of the other programmers' editors. As others have mentioned, Unicode support depends on support within the operating system as well as the editor.
What got me interested in using Unicode in source code files is Mark Summerfield's book Programming in Go. He includes an example .go file that uses Unicode. It would be great to use Greek characters for variable names instead of variables named "omega" or "theta".
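A minimal sketch of what that looks like (the identifiers are my own illustration, not from the book; Go source is defined to be UTF-8, so this compiles as-is):
package main

import (
	"fmt"
	"math"
)

func main() {
	θ := math.Pi / 4 // Greek theta as a variable name
	ω := 2 * math.Pi // Greek omega
	fmt.Println(math.Sin(θ), ω)
}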
Using Unicode in source code is a bad idea, however. Support for Unicode in programmers' editors is lousy, and developers would have to save or convert their source code files to UTF-8 instead of ASCII. Developers' tools are just not ready to write code in Unicode, no matter how neat the idea sounds.
Does anyone know how to generate Chinese characters using PostScript or related tools? I'd like to use Unicode to represent the Chinese characters, but it seems that PostScript doesn't support Unicode. In addition, I'd like to specify several fonts to generate the same character.
Thus, I have two questions:
1. How can I use Unicode in PostScript? Or how can I enumerate the Chinese character set the PostScript way?
2. How can I specify font configurations using PostScript?
Finally, in case PostScript cannot do this job, what tools should I turn to for my purpose?
Thank you very much!
-Jin
In Adobe's official PostScript language specification there is no specific support for Unicode fonts. (And this is the final version of the spec for PS Level 3, valid since its publication in 1999 -- PostScript as a language is no longer developed...)
However, PostScript supports (since Level 2) multi-byte fonts (2-, 3- and 4-byte) in a generic way (see 'CID'). All PostScript fonts need an "encoding": an encoding is basically a table telling at which index position of a font the glyph description for a given character can be found. So while there are no Unicode fonts as such, there are multi-byte CID fonts which provide ranged subsets of Unicode.
Also, there are no freely re-distributable CMaps. (A CMap maps multi-byte character codes to the CIDs, i.e. the glyph identifiers, of a CID font.) If you need a CMap, you have to derive it from the Windows codepage and the matching Adobe CMap.
If you're just looking for a "super-simple" method of using Unicode text strings, with no need to check for ranges, language, etc.: sorry to disappoint you. There is no way. That would be a pipe dream.
Have a look at CID-keyed fonts instead. These are designed to include a large number of glyphs. (Page 364ff in PLRM)
Update: Linked to the correct page with CID font description.
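To illustrate, a minimal sketch (this assumes your interpreter, e.g. Ghostscript, provides the Adobe UniGB-UCS2-H CMap and can supply the STSong-Light CIDFont; the two-byte string <4E2D6587> encodes U+4E2D U+6587, i.e. "中文", in UCS-2):
%!PS
% Compose a CID-keyed font with a UCS-2 CMap, then show a two-byte string.
/STSong-Light--UniGB-UCS2-H /UniGB-UCS2-H [/STSong-Light] composefont pop
/STSong-Light--UniGB-UCS2-H 24 selectfont
100 700 moveto <4E2D6587> show
showpage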
These days, more languages are using Unicode, which is a good thing. But it also presents a danger. In the past there were troubles distinguishing between 1 and l, and 0 and O. But now we have a completely new range of similar characters.
For example:
ì, î, ï, ı, ι, ί, ׀ ,أ ,آ, ỉ, ﺃ
With these, it is not that difficult to create some very hard-to-find bugs.
At my work, we have decided to stick to ANSI characters for identifiers. Is there anybody out there using Unicode identifiers, and what are your experiences?
Besides the similar-character bugs you mention and the technical issues that might arise when using different editors (with/without BOM, different encodings mixed into the same file by copy-pasting, which is only a problem when there are characters that cannot be encoded in ASCII, and so on), I find that it's not worth using Unicode characters in identifiers. English has become the lingua franca of development, and you should stick to it while writing code.
This I find particularly true for code that may be seen anywhere in the world by any developer (open source, or code that is sold along with the product).
My experience with using Unicode in C# source files was disastrous, even though it was Japanese (so there was nothing to confuse with an "i"). SourceSafe doesn't like Unicode, and when you find yourself manually fixing corrupted source files in Word, you know something isn't right.
I think your ANSI-only policy is excellent. I can't really see any reason why that would not be viable (as long as most of your developers are English speakers; and even if they're not, the world is used to the ANSI character set).
I think it is not a good idea to use the entire ANSI character set for identifiers. No matter which ANSI code page you're working in, your ANSI code page includes characters that some other ANSI code pages don't include. So I recommend sticking to ASCII, no character codes higher than 127.
In experiments I have used a wider range of ANSI characters than just ASCII, even in identifiers. Some compilers accepted it. Some IDEs needed options to be set for fonts that could display the characters. But I don't recommend it for practical use.
Now on to the difference between ANSI code pages and Unicode.
In experiments I have stored source files in Unicode and used Unicode characters in identifiers. Some compilers accepted it. But I still don't recommend it for practical use.
Sometimes I have stored source files in Unicode and used escape sequences in some strings to represent Unicode character values. This is an important practice and I recommend it highly. I especially had to do this when other programmers used ANSI characters in their strings, and their ANSI code pages were different from other ANSI code pages, so the strings were corrupted and caused compilation errors or defective results. The way to solve this is to use Unicode escape sequences.
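For example (Python shown as a sketch; C, C#, and Java offer the same kind of escape):
greeting = "Caf\u00e9 au lait" # \u00e9 is é, safe no matter how the source file is encoded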
I would also recommend using ASCII for identifiers. Comments can stay in a non-English language if the editor/IDE/compiler etc. are all locale-aware and set up to use the same encoding.
Additionally, some case-insensitive languages change identifiers to lowercase before use, and that causes problems if the active system locale is Turkish or Azerbaijani (see here for more info about the Turkish locale problem). I know that PHP does this, and it has a long-standing bug.
This problem is also present in any software that compares strings using Turkish locales, not only the language implementations themselves, just to point that out. It causes many headaches.
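A small sketch of the underlying issue (Python; its str.lower/str.upper are locale-independent, which makes the characters themselves easy to show):
print('İ'.lower()) # U+0130, Turkish dotted capital I, lowers to 'i' plus a combining dot
print('ı'.upper()) # U+0131, Turkish dotless i, uppers to plain ASCII 'I'
Under a Turkish locale, lowercasing 'I' yields 'ı' rather than 'i', so identifiers that compare equal elsewhere stop matching.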
It depends on the language you're using. In Python, for example, it is easier for me to stick to Unicode, as my applications need to work in several languages. So when I get a file from someone (or something) that I don't know, I assume Latin-1 and translate to Unicode.
Works for me, as I'm in Latin America.
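A sketch of that fallback (Latin-1 assigns a character to every byte value, so the decode never raises):
with open('incoming.txt', 'rb') as f:
    text = f.read().decode('latin-1') # always succeeds; at worst you get wrong glyphs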
Actually, once everything is ironed out, the whole thing becomes a smooth ride.
Of course, this depends on the language of choice.
I haven't ever used Unicode for identifier names. But what comes to mind is that Python allows Unicode identifiers in version 3: PEP 3131.
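A quick sketch of what PEP 3131 permits in Python 3:
π = 3.14159 # valid identifier
señal = "ready" # accented letters work too
print(π, señal)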
Another language that makes extensive use of Unicode is Fortress.
Even if you decide not to use Unicode, the problem resurfaces when you use a library that does. So you have to live with it to a certain extent.