Poedit encoding wrong when opening .po files

I'm using Poedit at work and I'm having trouble with the encoding in .po files.
I have Poedit 1.8.8 on a MacBook Pro running OS X 10.11.4.
I'm translating from English to Swedish, and the file already contains old translations. But the encoding is wrong, so I cannot see the 'ä', 'å' and 'ö' characters.
UPDATED VERSION:
I use GitHub Desktop, and when I sync the files it writes extra lines into them. I am sure GitHub Desktop does this, because the .po files worked fine before syncing and there is nowhere else this text could have come from.
This is what GitHub Desktop writes into my files:
blabla
<<<<<<< HEAD
"POT-Creation-Date: 2016-07-23 00:13+0000\n"
"PO-Revision-Date: 2016-08-03 17:23+0300\n"
=======
"POT-Creation-Date: 2016-08-11 18:37+0000\n"
"PO-Revision-Date: 2016-08-11 18:37+0000\n"
>>>>>>> origin/master
blabla
So what are the <<<<<<< HEAD, ======= and >>>>>>> origin/master lines?
Why does GitHub Desktop add them?
And why won't Poedit work with the file while they are there?
I removed them and everything works fine.
OLD VERSION:
When I open the .po file with other applications like Sublime Text 3 or TextEdit, the encoding is correct (UTF-8) and 'ä', 'å' and 'ö' are visible. But if I open the same file with Poedit, the encoding becomes ISO-8859-1 and this is the output for 'ä', 'å' and 'ö':
'ä' => 'Ã¤',
'å' => 'Ã¥',
'ö' => 'Ã¶'.
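(For reference, that substitution is exactly what UTF-8 bytes look like when they are read as ISO-8859-1; a minimal Python illustration:)
# Each UTF-8 encoded character, re-read as ISO-8859-1, turns into two characters.
for ch in "äåö":
    print(ch, "=>", ch.encode("utf-8").decode("iso-8859-1"))
# ä => Ã¤
# å => Ã¥
# ö => Ã¶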
I have tried changing the encoding in 'Catalog -> Properties -> Charset' but it does not change anything.
My question is:
How can I open .po files in Poedit so that the encoding is UTF-8?

Poedit seems to have some problem understanding the encoding of files generated with Notepad++, so you can try adding these four lines at the top of the file before opening it with Poedit:
msgid ""
msgstr ""
"Content-Type: text/plain; charset=UTF-8\n"
"X-Poedit-SourceCharset: UTF-8\n"

Catalog → Properties → Charset does change something: the file's encoding. It doesn't change the content.
Your problem is that you have a broken PO file (not created by Poedit or sane gettext tools) that lies about its encoding. It's not that the charset is "correct" in Sublime Text; Sublime Text simply defaults to UTF-8. Your file, which you didn't provide, must lie about its encoding, and Poedit trusts it. In other words, the good old Garbage In, Garbage Out principle.
Fix your broken PO file's Content-Type header manually in a text editor, most likely(!) to UTF-8, and all will be right.
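A quick way to see whether the declared charset and the actual bytes agree, as a rough sketch (the file name messages.po is only a placeholder):
# Sketch: compare the charset declared in the PO header with how the bytes decode.
import re

def check_po_charset(path):
    raw = open(path, "rb").read()

    # The header entry looks like: "Content-Type: text/plain; charset=UTF-8\n"
    match = re.search(rb"charset=([A-Za-z0-9_\-]+)", raw)
    declared = match.group(1).decode("ascii") if match else None
    print("Declared charset:", declared)

    # See whether the file's bytes are valid UTF-8 at all.
    try:
        raw.decode("utf-8")
        print("Bytes decode cleanly as UTF-8")
    except UnicodeDecodeError as err:
        print("Not valid UTF-8:", err)

check_po_charset("messages.po")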

I solved it by removing what GitHub Desktop placed in my file.
This:
blabla
<<<<<<< HEAD
"POT-Creation-Date: 2016-07-23 00:13+0000\n"
"PO-Revision-Date: 2016-08-03 17:23+0300\n"
=======
"POT-Creation-Date: 2016-08-11 18:37+0000\n"
"PO-Revision-Date: 2016-08-11 18:37+0000\n"
>>>>>>> origin/master
blabla
Becomes this:
blabla
"POT-Creation-Date: 2016-07-23 00:13+0000\n"
"PO-Revision-Date: 2016-08-03 17:23+0300\n"
blabla
You don't need duplicate POT-Creation-Date and PO-Revision-Date lines.
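As an aside, those <<<<<<<, ======= and >>>>>>> lines are standard Git merge-conflict markers; GitHub Desktop leaves them in the working copy when a sync runs into a conflict it cannot resolve automatically. A small sketch like the following (the locale directory is only a placeholder) can flag .po files that still contain unresolved markers before they are opened in Poedit:
# Sketch: flag .po files that still contain unresolved Git conflict markers.
from pathlib import Path

MARKERS = ("<<<<<<< ", "=======", ">>>>>>> ")

def find_conflicted_po_files(root):
    conflicted = []
    for po_file in Path(root).rglob("*.po"):
        text = po_file.read_text(encoding="utf-8", errors="replace")
        if any(line.startswith(MARKERS) for line in text.splitlines()):
            conflicted.append(po_file)
    return conflicted

for path in find_conflicted_po_files("locale"):
    print("Unresolved merge conflict in:", path)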

Related

pg_dump and pg_restore can't deal with file names containing Arabic characters

I use Postgres 10 and the pg_dump/pg_restore that come with it.
pg_dump and pg_restore give me an error when trying to dump/restore using a file or path that contains Arabic characters (I haven't tested anything other than English and Arabic).
Here's the exception when trying to restore from a file name that has non-English (e.g. Arabic) characters:
.\pg_dump.exe --file "C:\א\TOC.DUMP" --host "localhost" --port "1111" --username "MyUserName" --verbose --format=c --blobs --compress "1" --schema "MySchema" "MyDBName"
System.Exception: pg_restore: [custom archiver] could not open input
file "C:\?\TOC.DUMP":
Invalid argument
I get the same exception when the file path contains Arabic characters.
Everything works fine when the file name and path use only English characters.
I searched here, on Google, and in the PostgreSQL documentation, and couldn't find anything related, except for the fact that pg_dump/pg_restore have no problem dealing with scripts that contain Arabic and Hebrew text;
nothing is mentioned about the encoding of the file name itself.
How did I solve it? I didn't.
I couldn't stop shipping my projects because of this issue,
so as a temporary workaround I prevent the user from choosing a file name or path with Arabic characters,
using the code below. It's not a good way of doing things, I know.
// C# (requires using System.Text;)
BackUpPath = fileDialog.FileName;
// Round-trip the path through the Arabic code page (1256) and back through
// Western Latin (1252); any character that is not mapped to the same byte in
// both code pages (e.g. Arabic letters) will not survive the round trip.
var westernLatin = Encoding.GetEncoding(1252);
var arabic = Encoding.GetEncoding(1256);
var bytes = arabic.GetBytes(BackUpPath);
var result = westernLatin.GetString(bytes);
if (result != BackUpPath)
{
    // Inform the user to use only English characters in the file name and path.
}
Appreciate any help, thanks.

Jupyter notebook with Unicode characters: PDF conversion problems

I was writing math class notes with some Unicode characters (Simplified Chinese, in my case) in them. When I tried to convert the notebook to a PDF file, I got a 500 error. The error message reads:
...
*************************************************
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec-xetex.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\base\fontenc.sty"
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\base\tuenc.def"))
("E:\Program Files (x86)\MiKTeX 2.9\tex\latex\fontspec\fontspec.cfg")
! Undefined control sequence.
<argument> \LaTeX3 error:
Erroneous variable \c__fontspec_shape_n_n_tl used!
l.3806 \emfontdeclare{ \emshape, \eminnershape }
?
! Emergency stop.
<argument> \LaTeX3 error:
Erroneous variable \c__fontspec_shape_n_n_tl used!
l.3806 \emfontdeclare{ \emshape, \eminnershape }
No pages of output.
Transcript written on notebook.log.
I guess the fontspec part went wrong, but I don't know how to solve it.
For reference, here is what I had done before I got the 500 error:
1. I installed pandoc, and I already had MiKTeX.
2. I changed the file
...\nbconvert\templates\latex\article.tplx
rewriting the article class to ctexart.
3. I changed the file
...\nbconvert\templates\latex\exporters\pdf.py
rewriting the latex command to be
latex_command = List([u"xelatex", u"{filename}"], config=True,
help="Shell command used to compile latex."
)
4. I also tried this: https://github.com/ipython/ipython/issues/7150, which converts the .ipynb file to a LaTeX file first and then to PDF. This didn't work for me either, mainly because the config file can't be found by the jupyter nbconvert command (a config sketch is shown after this list).
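For reference, instead of editing pdf.py in place, the latex command can also be set from an nbconvert config file. This is only a sketch; the file name jupyter_nbconvert_config.py and the --config invocation are my assumptions, not part of the setup above:
# jupyter_nbconvert_config.py -- sketch; pass it explicitly so nbconvert
# does not have to find it on its own:
#   jupyter nbconvert --to pdf --config jupyter_nbconvert_config.py notebook.ipynb
c = get_config()

# Use xelatex so the CJK fonts pulled in by ctexart can actually be used.
c.PDFExporter.latex_command = ["xelatex", "{filename}"]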
For your information, my OS is Windows 7 Ultimate x64, and I use Chrome with the Anaconda3 Jupyter notebook.
Thanks in advance to anyone who takes the time to read my post. Any help would be appreciated.

Google Storage: Invalid Unicode path encountered

I'm trying to upload some files to GCS and I get this:
Building synchronization state...
Caught non-retryable exception while listing file:///media/Respaldo: CommandException: Invalid Unicode path encountered
(u'/media/Respaldo/Documentos/Trabajo/Traducciones/Servicio
Preventivo Semanal Hs Rev3 - Ingl\xe9s.doc'). gsutil cannot
proceed with such files present. Please remove or rename this file and
try again. NOTE: the path printed above replaces the problematic
characters with a hex-encoded printable representation. For more
details (including how to convert to a gsutil-compatible encoding) see
`gsutil help encoding`.
But when I run:
convmv -f ISO-8859-1 -t UTF-8 -r --replace /media/Respaldo
it says all the non-English file names are already UTF-8. How should I proceed?
Edit: example of convmv output:
Skipping, already UTF-8: /media/Respaldo/Multimedia/Mis Imágenes/NOKIA/Memoria/Videoclips/Vídeo004.3gp
Skipping, already UTF-8: /media/Respaldo/Multimedia/Mis Imágenes/NOKIA/Memoria/Videoclips/Vídeo009.3gp
Skipping, already UTF-8: /media/Respaldo/Multimedia/Mis Imágenes/NOKIA/Memoria/Videoclips/Vídeo00133.3gp
Skipping, already UTF-8: /media/Respaldo/Multimedia/Mis Imágenes/NOKIA/Memoria/Videoclips/Vídeo023.3gp
Skipping, already UTF-8: /media/Respaldo/Multimedia/Mis Imágenes/NOKIA/Memoria/Videoclips/Vídeo026.3gp
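gsutil's message is about the raw bytes of a file name not being valid UTF-8. To pin down exactly which entries are affected, a small sketch like this (the root path is taken from the question) walks the tree with bytes paths and prints any name whose bytes do not decode as UTF-8:
# Sketch: list file names whose on-disk bytes are not valid UTF-8.
# os.walk is given a bytes path so the names come back as raw bytes.
import os

def find_non_utf8_names(root):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                name.decode("utf-8")
            except UnicodeDecodeError:
                # Show undecodable bytes as \xNN escapes, like gsutil does.
                full = os.path.join(dirpath, name)
                print(full.decode("utf-8", "backslashreplace"))

find_non_utf8_names(b"/media/Respaldo")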

Eclipse: UTF-8 encoding not shown correctly in Texlipse editor

I'm working on a LaTeX project in Eclipse with German text. On one of our project computers, the words with Umlauts are shown with the wrong encoding, e.g.:
Ãœbersicht über
rather than:
Übersicht über
Under Window --> Preferences --> General --> Workspace the text file encoding is set to "Default (UTF-8)". Also, I have the following line at the top of the TeX file:
\usepackage[utf8x]{inputenc}
Here's the output of locale:
me#localhost:~> locale
LANG=de_DE.UTF-8
LC_CTYPE="de_DE.UTF-8"
LC_NUMERIC="de_DE.UTF-8"
LC_TIME="de_DE.UTF-8"
LC_COLLATE="de_DE.UTF-8"
LC_MONETARY="de_DE.UTF-8"
LC_MESSAGES="de_DE.UTF-8"
LC_PAPER="de_DE.UTF-8"
LC_NAME="de_DE.UTF-8"
LC_ADDRESS="de_DE.UTF-8"
LC_TELEPHONE="de_DE.UTF-8"
LC_MEASUREMENT="de_DE.UTF-8"
LC_IDENTIFICATION="de_DE.UTF-8"
LC_ALL=
Here are the TeXlipse settings:
#TeXlipse project settings
#Tue Apr 29 11:10:19 CEST 2014
markTmpDer=true
builderNum=2
outputDir=
makeIndSty=
bibrefDir=
outputFormat=pdf
tempDir=tmp
mainTexFile=main.tex
outputFile=out.pdf
langSpell=de
markDer=true
srcDir=
On the other computers it works fine. If I open the file in another editor (e.g. Kile, vi, etc.) the umlauts are also shown correctly. What's the problem here?
I had the same problem and could resolve it by going to "Project properties > Resource" and, in the "Text file encoding" group box, selecting "Other (UTF-8)" instead of "Inherited from container" (whose value was "Cp1252").

Ajax.InPlaceEditor.prototype: spaces are encoded to '+' after upgrading prototype to 1.7.1

I use InPlaceEditor in my code and it worked fine with Prototype version 1.6.0.3, but after I upgraded to 1.7.1, all the spaces in the text field are replaced with "+" after being saved. What can I do to "disable" InPlaceEditor's text encoding?
The problem is: what if I have a string "I have a dog + a cat + a parrot"? It would be sent to the server as "I+have+a+dog+++a+cat+++a+parrot".
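For context (general background, not something specific to Prototype): in application/x-www-form-urlencoded data a space is sent as '+' and a literal '+' is escaped as '%2B', so a correctly encoded and correctly decoded round trip is unambiguous. A minimal Python illustration of that round trip:
# Round trip through form-urlencoding: space -> '+', literal '+' -> '%2B'.
from urllib.parse import quote_plus, unquote_plus

original = "I have a dog + a cat + a parrot"

encoded = quote_plus(original)
print(encoded)                      # I+have+a+dog+%2B+a+cat+%2B+a+parrot

decoded = unquote_plus(encoded)
print(decoded == original)          # True
If literal plus signs come through untouched while spaces turn into '+', the value was probably stored without the matching form-urlencoded decode on the server side.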