Encoding of file names in Java - encoding

I am running a small Java application on an embedded Linux platform. After replacing the Java VM JamVM with OpenJDK, file names with special characters are not stored correctly. Special characters like umlauts are replaced by question marks.
Here is my test code:
import java.io.File;
import java.io.IOException;

public class FilenameEncoding {
    public static void main(String[] args) {
        String name = "umlaute-äöü";
        System.out.println("\nname = " + name);
        System.out.print("name in Bytes: ");
        for (byte b : name.getBytes()) {
            System.out.print(Integer.toHexString(b & 255) + " ");
        }
        System.out.println();
        try {
            File f = new File(name);
            f.createNewFile();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Running it gives the following output:
name = umlaute-???
name in Bytes: 75 6d 6c 61 75 74 65 2d 3f 3f 3f
and a file called umlaute-??? is created.
Setting the properties file.encoding and sun.jnu.encoding to UTF-8 gives the correct strings in the terminal, but the created file is still named umlaute-???.
Running the VM with strace, I can see the system call
open("umlaute-???", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0666) = 4
This shows that the problem is not a file system issue, but one of the VM.
How can the encoding of the file name be set?
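As a quick diagnostic (this snippet is not from the original post), the charsets the VM actually picked up can be printed directly; on a misconfigured system all three will typically report an ASCII variant such as ANSI_X3.4-1968:

```java
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // Charset used for String.getBytes(), System.out, etc.
        System.out.println("defaultCharset   = " + Charset.defaultCharset());
        // System property backing the default charset
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        // Charset the VM uses when encoding file names for the OS
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
    }
}
```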

If you are using Eclipse, you can go to Window->Preferences->General->Workspace and select the "Text file encoding" option you want from the pull-down menu. By changing mine around, I was able to recreate your problem (and also change back to the fix).
If you are not, you can add an environment variable in Windows (System Properties->Environment Variables, then under System variables select New...). The name should be (without quotes) JAVA_TOOL_OPTIONS and the value should be set to -Dfile.encoding=UTF8 (or whatever encoding gets yours to work).
I found the answer through this post, btw:
Setting the default Java character encoding?
Linux Solutions
-(Permanent) Running env | grep LANG in the terminal will give you one or two responses back on what encoding Linux is currently set up with. You can then set LANG to UTF8 (yours might be set to ASCII) in the /etc/sysconfig/i18n file (I tested this on 2.6.40 Fedora). Basically, I switched from UTF8 (where I had odd characters) to ASCII (where I had question marks) and back.
-(On starting the JVM, but may not fix the problem) You can start the JVM with the encoding you want using java -Dfile.encoding=**** FilenameEncoding
Here is the output from the two ways:
[youssef#JoeLaptop bin]$ java -Dfile.encoding=UTF8 FilenameEncoding
name = umlaute-הצ�
name in Bytes: 75 6d 6c 61 75 74 65 2d d7 94 d7 a6 ef bf bd
UTF-8
UTF8
[youssef#JoeLaptop bin]$ java FilenameEncoding
name = umlaute-???????
name in Bytes: 75 6d 6c 61 75 74 65 2d 3f 3f 3f 3f 3f 3f 3f
US-ASCII
ASCII
Here are some references for the Linux stuff:
http://www.cyberciti.biz/faq/set-environment-variable-linux/
and here is one about the -Dfile.encoding
Setting the default Java character encoding?

I know it's an old question but I had the same problem.
None of the solutions mentioned here worked for me, but the following combination did:
-Setting the source encoding to UTF-8 (project.build.sourceEncoding set to UTF-8 in the Maven properties)
-Passing the JVM arguments -Dfile.encoding=utf8 and -Dsun.jnu.encoding=utf8
-Using java.nio.file.Path instead of java.io.File
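A minimal sketch of the java.nio.file variant: unlike java.io.File, which silently substitutes ? for unmappable bytes (as seen in the strace output above), the NIO path API on OpenJDK raises an exception when a file name cannot be encoded with sun.jnu.encoding, which at least makes the misconfiguration visible:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NioFilename {
    public static void main(String[] args) throws IOException {
        // On a VM with a broken locale this line (or createFile) should
        // fail loudly instead of writing '?' bytes to the file system
        Path p = Paths.get("umlaute-äöü");
        Files.deleteIfExists(p);  // clean up leftovers from earlier runs
        Files.createFile(p);
        System.out.println("created: " + p.toAbsolutePath());
        Files.delete(p);          // tidy up
    }
}
```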

Your problem is that javac is expecting a different encoding for your .java file than the one you saved it in. Didn't javac warn you when you compiled?
Maybe you have saved it with encoding ISO-8859-1 or windows-1252, and javac is expecting UTF-8.
Provide the correct encoding to javac with the -encoding flag, or the equivalent for your build tool.

Related

How do I print UTF-8 Character in Weblogic 12c Console?

String fullBriefDescription = "ຈົວ昨夜, 最高コンサ";
System.out.println("fullBriefDescription Line 1094 "+fullBriefDescription);
System.out.println("fullBriefDescription Length Line 1095 "+fullBriefDescription.length());
I have created a Java file and run it through WebLogic 12c as part of the project, but when I run the project and print the output, why am I getting "????? ???" instead of the above characters?
Please Help

How I can delete an applet from my java card?

I installed an applet on my Java Card and now I want to delete it. How can I do that?
I tried the "Reset Card" button in JCManager and also gpj -delete AID, but it was not deleted. I also tried gpshell, but that didn't work either. The output of gpj is shown below.
Note: I downloaded the .cap file from here [section: Java Card Applet for read and write data] and here is its source code.
You are trying to delete the executable load file without deleting the applet instance installed from it. Either change the parameter P2 of the DELETE APDU to 0x80 (delete object and related objects); GPJ does this if you pass the argument -deletedeps:
gpj -deletedeps -delete D2760000605002
Alternatively delete the applet instance (AID: D2 76 00 00 60 41 02) first.
The DELETE command format looks OK:
80E40000094F07D2760000605002
80 -> CLA
E4 -> INS: DELETE
00 -> P1: reference control parameter -> last (or only) command
00 -> P2 -> delete object only
09 -> Lc: length of the data field
4F -> tag: executable load file or application AID
07 -> length of the load file AID
D2760000605002 -> load file AID
It looks like the load file AID does not belong to the applet you are trying to delete.
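For illustration only (this snippet is an assumption on my part, not something gpj does), the same DELETE APDU with P2 = 0x80 can be assembled with the standard javax.smartcardio API, which encodes the header, Lc, and data for you:

```java
import javax.smartcardio.CommandAPDU;

public class DeleteApdu {
    public static void main(String[] args) {
        // Load file AID from the question, wrapped in tag 0x4F + length
        byte[] data = {
            (byte) 0x4F, 0x07,
            (byte) 0xD2, 0x76, 0x00, 0x00, 0x60, 0x50, 0x02
        };
        // P2 = 0x80: delete the object and its related objects
        CommandAPDU delete = new CommandAPDU(0x80, 0xE4, 0x00, 0x80, data);
        for (byte b : delete.getBytes()) {
            System.out.printf("%02X", b);  // prints 80E40080094F07D2760000605002
        }
        System.out.println();
    }
}
```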

Can I read Cyrillic (Russian) characters from the CLI in Groovy?

I have a Groovy script that takes user input from the CLI. The CLI supports Cyrillic characters and both the encoding and charset are in UTF-8. Yet, when Groovy reads input with Cyrillic characters, all it sees are "???????". Additionally, Groovy cannot create a directory or file with the given parameter. Does anyone have any ideas on forcing Groovy to accept the Cyrillic characters? Thanks.
Ensure the reader you're using has the same encoding as your CLI. If it does, the problem could instead be in displaying the characters. You can verify the Unicode code points Groovy is getting like this:
// test.groovy
def input = System.in.withReader('UTF-8') { it.readLine() }
input.eachWithIndex { ch, index ->
    println "$ch, ${Character.codePointAt(input, index)}"
}
Run this from the CLI:
$ echo $LANG
en_US.UTF-8
$ echo Здра́вствуйте | groovy test.groovy
З, 1047
д, 1076
р, 1088
а, 1072
́, 769
в, 1074
с, 1089
т, 1090
в, 1074
у, 1091
й, 1081
т, 1090
е, 1077

finding error in command line

I am trying to run a program from the command line, but I get an error.
The command line is:
quantisnp2.exe --outdir D:\output\ --config "C:\Program files\QuantiSNP\params.dat" --levels "C:\Program files\QuantiSNP\levels.dat" --sampleid CNV1 --gender female --emiters 10 --Lsettings 2000000 --doXcorrect --genotypes --gcdir D:\gc\ --input-files C:\Program files\CNV1.txt
QuantiSNP:Single-file mode input find.
QuantiSNP:Processing file: C:|Program
QuantiSNP:Local CG content directory specified. Local CG content correction will be used.
??? Error using ==>textread at 167
File not found.
Error in ==> quantisnp2 at 293
The first thing I'd be looking at is the unquoted C:\Program files\CNV1.txt at the end of the command (all your other ones are quoted).
There's a good chance that's being treated as two arguments, C:\Program and files\CNV1.txt.
You may also want to check the spelling of emiters; I'm pretty certain the correct English word would be emitters, though, of course, this could be a case of the QuantiSNP developers not knowing how to spell :-)

Python 3 doesn't read unicode file on a new server

My webpages are served by a script that dynamically imports a bunch of files with
try:
    with open(filename, 'r') as f:
        exec(f.read())
except IOError:
    pass
(actually, can you suggest a better method of importing a file? I'm sure there is one.)
Sometimes the files have strings in different languages, like
# contents of language.ru
title = "Название"
Those were all saved as UTF-8 files. Python has no problem running the script in command line or serving a page from my MacBook:
OK: [server command line] python3.0 page.py /index.ru
OK: http://whitebox.local/index.ru
but it throws an error when trying to serve a page from a server we just moved to:
157 try:
158 with open (filename, 'r') as f:
159 exec(f.read())
160 except IOError: pass
161
/usr/local/lib/python3.0/io.py in read(self=, n=-1)
...
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe1 in position 627: ordinal not in range(128)
All the files were copied from my laptop where they were perfectly served by Apache. What is the reason?
Update: I found out the default encoding for open() is platform-dependent so it was utf8 on my laptop and ascii on server. I wonder if there is a per-program function to set it in Python 3 (sys.setdefaultencoding is used in site module and then deleted from the namespace).
Use open(filename, 'r', encoding='utf8').
See Python 3 docs for open.
Use the codecs library. I'm using Python 2.6.6, and I do not use the usual open with an encoding argument:
import codecs
codecs.open('filename','r',encoding='UTF-8')
You can use something like
with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
    data = f.read()
# make changes to the string 'data'
with open(fname + '.new', 'w',
          encoding="ascii", errors="surrogateescape") as f:
    f.write(data)
More information is in the Python Unicode documentation.