Jekyll does not parse UTF-8 encoding

I created a page in Notepad and selected UTF-8 as the encoding while saving. Jekyll does not parse this page; it renders the Liquid tags in the page as-is.
I then saved the same page using ANSI encoding. Jekyll parses that easily and my site is up and running. But ANSI is limited, and some characters appear as question marks due to the wrong encoding. I do not want to fall back to ANSI when the web fully supports UTF-8.

This is likely because Notepad inserts a byte order mark (BOM) at the beginning of UTF-8 documents, which can interfere with their processing (especially by tools aimed primarily at Unix). Try saving the file with another text editor, or strip the BOM with a separate tool.
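As a minimal sketch of the stripping approach (the function name and in-place rewrite are my own choices, not a standard tool):

```python
import codecs

def strip_bom(path: str) -> None:
    """Remove a leading UTF-8 BOM from a file, if one is present."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        # Rewrite the file without the 3-byte BOM prefix
        with open(path, "wb") as f:
            f.write(data[len(codecs.BOM_UTF8):])
```

Run it once over the page Notepad saved and Jekyll should see a clean UTF-8 file.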

Related

How to make a GitHub README.md render

I have a README.md here but it is not showing up as rendered Markdown, it just shows the raw text. Does anyone know what I'm doing wrong here?
https://github.com/slothdude/soundcloud-groupme-bot/blob/master/README.md
There's no way to reliably detect a file's encoding. At the end of the day, it's a guessing game.
That particular file is stored in some strange encoding. Some editors (e.g. Emacs) seem to mostly open it successfully (though there are a few strange characters that might be whitespace), but they don't know what the encoding is. When I ask Emacs what encoding it's using, I get no-conversion, which isn't very helpful.
Others, like Gedit, show what looks like a mixture of kanji and rectangular symbols suggesting unknown values.
Tools like file and enca seem to have no idea what it is:
$ file README.md
README.md: data
$ enca README.md
enca: Cannot determine (or understand) your language preferences.
Please use `-L language', or `-L none' if your language is not supported
(only a few multibyte encodings can be recognized then).
Run `enca --list languages' to get a list of supported languages.
Open it in a decent text editor (ideally the one you've used to author it) and save it as UTF-8, then commit that change. I suspect that this will fix its rendering on GitHub.
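If you'd rather convert from the command line than re-save in an editor, a hedged Python sketch (you must supply the original encoding yourself; "shift_jis" below is only an example guess, since the file's real encoding is unknown):

```python
def reencode(src: str, dst: str, source_encoding: str) -> None:
    """Re-save a file as UTF-8, given its known (or guessed) original encoding."""
    # Decode using the caller-supplied encoding; raises UnicodeDecodeError on a bad guess
    with open(src, "r", encoding=source_encoding) as f:
        text = f.read()
    with open(dst, "w", encoding="utf-8") as f:
        f.write(text)

# e.g. reencode("README.md", "README.utf8.md", "shift_jis")
```

If the guess is wrong the decode step fails loudly rather than silently producing mojibake, which makes trial-and-error over a few candidate encodings practical.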

Keep file encoding in eclipse for each file (different encodings for different files)

I'm working with a git repository where some of the files are encoded in latin-1 and some of them in utf-8. I'm using Eclipse CDT to work with them, and it's configured to use UTF-8 as default encoding.
The thing is, when I open latin-1 encoded files, some of the characters are not shown properly, and the problem persists even though I just tried the Luna version, which came out two days ago (latin-1 and latin-2 are supposedly supported now, according to the release notes).
Furthermore, and here comes the real trouble, when I modify and save latin-1 encoded files, they are being saved as UTF-8 (as configured in Eclipse), so if I push these changes to the repository, quite a lot of conflicts will emerge, messing up the entire commit.
Is there some way of telling Eclipse to keep the original encoding for each file?
Thank you.
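One hedged pointer: Eclipse records an explicit per-file encoding (set via right-click → Properties → Resource → Text file encoding) in the project's .settings/org.eclipse.core.resources.prefs, and committing that file makes the override survive on every checkout. A sketch of what such an entry looks like (the file path here is a made-up example):

```
eclipse.preferences.version=1
encoding//src/legacy/parser.c=ISO-8859-1
```

With an explicit per-resource entry, Eclipse stops applying the workspace-wide UTF-8 default to that file.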

Eclipse .properties file disable escaping of UTF-8 characters

I'm using *.properties files in my Java/Android applications for my translation files. My problem is that in Eclipse, .properties files escape UTF-8 characters that are outside the ISO-8859-1 charset, so I see the escaped sequences. So I decided to write my own library that reads the file as UTF-8, BUT Eclipse still escapes the characters. Is there any way to make Eclipse handle *.properties files as normal text files?
Right-click the file and choose Properties. Under the "Resource" tab, check "Text File Encoding" at the bottom right and change it to UTF-8.
Don't call them .properties files, give them another file extension and they will be handled by the text editor only, instead of the properties file editor.
Even without the editing issue you should not call them .properties, as they are not compliant to the Java properties file standard, which might confuse other developers on that project, other tools and so on.
The best solution, however, is yet another one: throw away your self-made implementation and get a better editor for properties files, one that shows you the characters as you want to read them, independent of how they are encoded in the file.
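For reference, the escaping Eclipse applies is the standard \uXXXX form from the Java properties format; a minimal Python sketch of undoing it (the function name is mine, and it assumes the escaped file is plain ASCII/Latin-1, which is what the properties editor writes):

```python
def unescape_properties(text: str) -> str:
    """Decode \\uXXXX escape sequences as written by a Java properties editor."""
    # unicode_escape interprets backslash escapes (\\uXXXX, \\n, \\t, ...)
    return text.encode("latin-1").decode("unicode_escape")
```

This is only for inspecting what the editor wrote; a proper properties reader should handle line continuations and key/value splitting as well.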

Eclipse turns Japanese into garbage during refactoring

I have several Java files that have Japanese strings in them and are encoded in UTF-8. I use Eclipse. However, whenever Eclipse touches them in any automated way, it turns the Japanese into garbage. A good example of this is JAWJAW, the Java Japanese WordNet interface. You can see the code on the website with Japanese characters in it. If you load the project into Eclipse, though, everything will fail because the characters are garbled (mojibake).
Does anyone know how to fix this?
What is the default encoding for your project?
Future versions of Eclipse (like e4) could be set by default to UTF-8, which would avoid any automatic conversion into "garbage".
See bug 108668 for more on that discussion:
No solution will be perfect. However in the long term I think the current platform specific approach is clearly inferior to a platform-independent UTF-8 default.
+1. UTF-8 should be the obvious default character set for all text files. I had
a problem with Eclipse when I was using an English Windows XP system and trying
to open a file with Chinese characters; as you can imagine, the display was
completely messed up and Eclipse didn't tell me what I needed to do. I had to
spend time googling for answers, and eventually put -Dfile.encoding=UTF-8 in
eclipse.ini so that it behaved correctly.
Making UTF-8 the default is not the right solution for the problem you were
having.
+1 for embedding encoding in the character stream wherever we can (like XML, HTTP, some kinds of file systems).
Encoding is meta-info for the data and belongs to the data, not to a separate user-changeable setup.
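For reference, the workaround quoted in the bug comment above boils down to forcing the JVM's default charset in eclipse.ini; a sketch (the flag must appear after -vmargs, and the other lines in your eclipse.ini will vary by install):

```
-vmargs
-Dfile.encoding=UTF-8
```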
The primary cause is that a Unicode-capable font is missing from the system fonts. Do the following:
1. Download the Arial Unicode MS font and put it in the windows->fonts directory.
2. Change the default text encoding in Eclipse to UTF-8 by navigating to Window->Preferences->General->Workspace->Text File Encoding->Other->UTF-8.
3. Set Arial Unicode MS as the Text Font by navigating to Window->Preferences->General->Appearance->Colors and Fonts->Basic->Text Font (select it)->Edit.

looking for a UTF-8 text editor

I am looking for a (simple) text editor that can handle text in different encodings in the same document.
I need to develop some sites with mixed Japanese and English text and the editors I have now (on an English Windows system) are unable to display the Japanese text.
jEdit doesn't display the Japanese text I have inputted, but when I look at the file in a browser it shows up correctly.
gVim shows all Japanese text as question marks, both in the editor and in the browser.
In gVim, inputting the kanji works (you type the pronunciation and then press the space bar to get the kanji), but when you confirm the kanji you want, it is replaced with question marks (one question mark for every kanji).
Can someone recommend a text editor for editing HTML and PHP files that can display UTF-8 encoded text and also save as a UTF-8 file?
thank you.
After reading about emacs I installed it. see below.
Thanks everybody for the hints.
If you don't have a Unicode font yet, you have to find one online or buy one.
Here are the instructions for installing a font on a Windows system: http://support.microsoft.com/kb/314960
jEdit
I changed my font in jEdit to a Unicode font and now the Japanese shows up normally.
Inputting the Japanese is still problematic, as you don't see what you are typing.
(To change the font used for editing files, go to Utilities -> Global Options -> Text Area,
select a Unicode font, and you'll be able to see the Japanese characters.)
gVim
I am still trying to figure out how to add a font in gVim. Once I know how to do that, I'll update this.
Emacs
Emacs does not show the kanji correctly; they are displayed as ???, but at least I can see what I type in Japanese and select the right word.
So at this point: in jEdit I can see Japanese text but I can't input it; in gVim I can input Japanese text but it is displayed as ??? in the text area, and the same goes for Emacs.
Adding a font in Emacs and gVim is, sadly, not a trivial task.
At the moment I use notepad with the Arial unicode MS font and saving as UTF-8 file as my Japanese editor. Not ideal but at least it works.
Notepad++ is highly recommended.
Emacs correctly handles UTF-8 for me. (And of course, it can edit HTML and PHP files).
I would still recommend Vim. The problem you were seeing with question marks is probably an issue with the font you were using: when displaying characters that are not covered by the current font, applications typically show them as empty boxes or question marks. See here for UTF-8 support in Vim.
This section of the Vim manual is also helpful, especially for setting up UTF-8 in Windows.
There is an issue with most Unicode-aware text editors: when you select a font, they stick to it. If the font does not include a glyph for a character, then the default substitution character (I believe U+FFFD, REPLACEMENT CHARACTER) is used.
In contrast, web browsers typically try to find a glyph for the characters they have to display among all the fonts provided by the system.
So, what you need, if you don't have the font "Arial Unicode MS" or similar (including Japanese glyphs), is an editor that tries to match glyphs with other fonts except the selected one.
Until someone provides a link for such an editor, I'll suggest a (somewhat extreme :) editor:
Install the latest stable python 2.x version for MS Windows (currently 2.6).
Include "idle" in the installation.
Start → Programs → Python 2.6 → Idle (Python GUI)
The "idle" editor is typically used to edit python code (and test it interactively in the Python shell). However, it can be used as a plain fully-Unicode-aware text editor, and when saving text including non-ASCII chars, it defaults to UTF-8 encoding.
Now, IDLE is based on Tkinter, which is an interface to Tk, the GUI library for Tcl. Tcl/Tk, like a web browser, when asked to display a character for which no glyph is present in the widget font, searches other fonts too.
However far-fetched this may seem, I really believe it would help; if no other solution helps you, give it a try.
Vim works fine for me as a UTF-8 text editor.
Firstly, you need a font that has the characters you are using. Choosing another text editor won't help you with this (unless it searches for other fonts for the correct characters when the font you are using doesn't have them). If you are using gVim, you can set the font like:
set guifont=Consolas
(This is not to say that Consolas is the font you want.) You probably want to put this in the .vimrc file so that it is always used.
Secondly, Vim needs to interpret the file as UTF-8, which it doesn't always automatically do. To make it do this, do:
set encoding=utf8
You can also see what encoding it is using with:
set encoding?
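Putting this answer's pieces together, a minimal .vimrc sketch for gVim on Windows (the font name is only an example; use any monospace font that actually has the glyphs you need):

```
" tell Vim to use UTF-8 internally
set encoding=utf-8
" detect BOM'd and plain UTF-8 files, falling back to latin1
set fileencodings=ucs-bom,utf-8,latin1
" a Japanese-capable monospace font (underscores stand in for spaces)
set guifont=MS_Gothic:h12
```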
EmEditor is written by a Japanese company for exactly this purpose. It is a fine text editor with good performance/simplicity but pretty much all the features expected of a capable editor; I use it as my default when on the Windows platform, as well as for editing Japanese web page templates. It deserves to be better-known IMO; it is at least as good as, say, TextPad, but with full Unicode support.
Unfortunately it is not free, however you can find a free version of the old EmEditor 6 at sites such as download.com.
You can use just Notepad.exe with the "Arial Unicode MS" font (if all of your text is left-to-right, given the English windows version). Just Save as, select UTF-8.
In general, use your favourite editor with a font like "Arial Unicode MS". I mention this one because it is the font with the greatest Unicode coverage I have seen.
Try BabelPad. Editing-wise, it's simple. Unicode-support-wise, it's awesome!
It sounds like maybe the problem with Jedit is the font - are you using a font that can display all the characters correctly?
To be more precise, Arial Unicode MS is a reasonable choice for a Unicode font that can display a wide range of characters across the range of languages. There are certain issues with it that can make it less than optimal for some languages used in isolation - this is why there are also language specific Unicode fonts included with Windows.
I've never had a problem with vim as long as I use a font that actually contains the characters I want. It needs to be a monospace font. :set enc=utf8 to get to utf8 mode. Then you can use :digraph command to get a display of available characters, and see how each is displayed.
To add a font, add it in Windows (Control Panel/Fonts/Add Font). If it's a monospace font, it will then show up in vim under Edit/Font.
Just to add another one: I just checked that Programmer's Notepad 2 has some UTF-8 setting too.
(vim and emacs do just fine as well)
EditPlus seems to be a better option for UTF-8, in my experience.
EditPad Lite and Pro fully support Unicode as of version 6. (Disclaimer: Those are my own products.)
If you get question marks, you're using an encoding that does not support Japanese characters. In EditPad, you can change the text encoding (Unicode, legacy code pages) via Convert, Text Encoding. You can set the defaults per file type in Options, Configure File Types, Encoding.
If you see squares instead of Japanese characters, select a Japanese font or a Unicode font. You can do this in EditPad via Options, Font.
To type Japanese, simply install a Japanese keyboard driver in the keyboard settings in the Windows Control Panel, if you haven't already.
EditPad Pro has preconfigured file types for PHP and HTML.
Kate, and by extension any other KDE program that uses Kate as an embedded KPart (KWrite, Quanta+, KDevelop). It handles lots of encodings, but I like to always use UTF-8. It also has a huge collection of syntax highlighting definitions.
Try SciTE http://gisdeveloper.tripod.com/scite.html. It's just great ;)
For very basic UTF-8 multilingual text editing, I have had good luck with BabelPad (www.babelstone.co.uk): it's free, simple and robust and displays almost everything with no fuss. When the editing needs are more severe, I resort a lot to EditPad Pro, or occasionally Notepad++. For non-Unicode editing on Windows, I'm a TextPad user; my staff and I have probably spent about 200,000 hours in TextPad, with only occasional forays into NotePad2, MadEdit, jEdit, XML Copy Editor, and EPCedit. The latter two handle UTF-8 XML files well. All of the editors mentioned above are free except TextPad and EditPad Pro. Thanks to the person who suggested EmEditor; I'll try it out. --PFSchaffner
I like jEdit for its ability to indent wrapped lines. Really nice when editing XML files. A word of warning though: it's Java, so it's not lightning fast, like you would expect a text editor to be.
Text codecs are fully supported. It distinguishes between text files with and without the header identifying the file format (the byte order mark), calling them UTF-8 and UTF-8Y. This is something I miss in other text editors.
Try EditPlus. It has specific support for HTML, syntax highlighting and can also work as a simple IDE for any compiler.
On the Mac: SubEthaEdit has excellent support for character encodings.
TextPad is a good utility too. It's trialware, but does the job fine. See how to set the character encoding in TextPad.
For japanese, Sakura Editor is exceptional. It can display UTF-8, EUC-JP, SJIS and so on.
http://www.ultraedit.com/ is a multiplatform editor that does UTF-8 and all kinds of conversions between formats
EditPad Pro ... is recommended for you.
cheers ;)