ResourceBundle loading ISO-8859-1 characters incorrectly - encoding

I have the following test_fi.properties file in my project, containing special characters that display properly in IntelliJ. For Chinese characters, for instance, IntelliJ correctly warns that the selected encoding ISO-8859-1 doesn't support them.
However the following code block:
final ResourceBundle resourceBundle = ResourceBundle.getBundle("translations/test", new Locale("fi"));
String preFormattedMessage = resourceBundle.getString("testing.stuff");
System.out.println("Message is: " + preFormattedMessage);
When executed, it prints the following line:
Message is: testing some special stuff here ��� ��� ���
What might be wrong with loading the resource bundle, given that the ISO-8859-1 characters aren't shown properly? For the record, I'm using OpenJDK 11.
java --version
openjdk 11.0.15 2022-04-19 LTS
OpenJDK Runtime Environment Zulu11.56+19-CA (build 11.0.15+10-LTS)
OpenJDK 64-Bit Server VM Zulu11.56+19-CA (build 11.0.15+10-LTS, mixed mode)

Unable to load Aspera Library in Ubuntu

I am trying to load files from on-prem to IBM Cloud Object Storage using the Aspera high-speed API. It works fine on Mac, but when the same code is run on Ubuntu 18.04 it gives the following error.
Failed to load Aspera dynamic library from candidates
[libfaspmanager2.so] at location: /root/.aspera/cos-aspera/0.1.163682
With dependency:
<dependency>
    <groupId>com.ibm.cos-aspera</groupId>
    <artifactId>cos-aspera-linux-64</artifactId>
    <version>0.1.163682</version>
</dependency>
I made sure the environment variables below are set:
a) export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64
b) export LD_PRELOAD=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/libjsig.so
Using the following Java:
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
You can try to decompile the Java code that loads the native lib (e.g. using JAD).
You would find that in:
com.ibm.cloud.objectstorage.services.aspera.transfer.AsperaLibraryLoader
Here is the method:
public static void loadLibrary(File extractedPath, List<String> candidates) {
    for (String lib : candidates) {
        File libPath = new File(extractedPath, lib);
        String absPath = libPath.getAbsolutePath();
        log.debug("Attempting to load dynamic library: " + absPath);
        try {
            System.load(absPath);
            log.info("Loaded dynamic library: " + absPath);
            return;
        } catch (UnsatisfiedLinkError e) {
            log.debug("Unable to load dynamic library: " + absPath, e);
        }
    }
    throw new RuntimeException("Failed to load Aspera dynamic library from candidates " + candidates + " at location: " + extractedPath);
}
You can try to reproduce the error and get the actual message by compiling and running:
public class TestLoad {
    public static void main(String[] args) {
        System.load("/root/.aspera/cos-aspera/0.1.163682/libfaspmanager2.so");
    }
}
On Windows, a similar problem occurs due to missing dependencies of the native lib, and it is fixed by adding the folder containing the missing libs to PATH.
On Linux, check the dependencies with:
ldd /root/.aspera/cos-aspera/0.1.163682/libfaspmanager2.so
If any dependencies are missing, add their folder path to LD_LIBRARY_PATH.
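It is also worth verifying that the variable actually reaches the JVM that runs the loader (processes started via systemd, cron, or an IDE often don't inherit shell exports). A minimal sketch:

```java
public class EnvCheck {
    public static void main(String[] args) {
        // null means the variable never reached this JVM's environment
        System.out.println("LD_LIBRARY_PATH = " + System.getenv("LD_LIBRARY_PATH"));
        // the path the JVM itself searches for System.loadLibrary calls
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    }
}
```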
Another note: the Java lib is instrumented with Apache Commons Logging:
https://commons.apache.org/proper/commons-logging/guide.html
So activate the logging to get a better idea of what is failing.
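For example, commons-logging can be routed to its bundled SimpleLog at debug level without any config file; the two property names below are standard commons-logging settings (a sketch, to be run before the transfer starts):

```java
public class EnableAsperaDebugLogs {
    public static void main(String[] args) {
        // Route commons-logging to its bundled SimpleLog implementation
        System.setProperty("org.apache.commons.logging.Log",
                "org.apache.commons.logging.impl.SimpleLog");
        // Log everything at debug level and up
        System.setProperty("org.apache.commons.logging.simplelog.defaultlog", "debug");
        // ... then run the Aspera transfer; AsperaLibraryLoader's log.debug
        // calls will now show which candidate paths failed and why
    }
}
```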

sbt test encoding issue

I have an issue very similar to sbt test encoding hickup, but since that answer does not apply and my case is in Scala code, I'm asking here.
I have a string containing non-ASCII characters in a unit test. The test works fine on Linux, and on Windows when run from IntelliJ. However, when run from a Windows shell with sbt test, it fails. If I print the string humanité, it is displayed as humanitΘ in the failing case. The file encoding is UTF-8.
println(new java.io.InputStreamReader(System.in).getEncoding) returns UTF8 when run from IntelliJ, and Cp1252 from the shell. I tried various things to change the encoding:
run sbt "-Dfile.encoding=UTF-8" test
check that scalacOptions defined in my build.sbt contains "-encoding", "UTF8"
But the default encoding is always Cp1252 (maybe that's normal?) and the test keeps failing.
The failing code is the following:
val stringToEncrypt = "l'humanité"
println(stringToEncrypt)
From IntelliJ I get:
l'humanité
From a windows shell running sbt:
l'humanitΘ
To avoid problems with the OS default charset, you can pass the desired charset explicitly when creating the reader:
new java.io.InputStreamReader(System.in, "UTF-8")
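The difference is easy to see by comparing the platform default with an explicitly requested charset. A small sketch (a byte stream stands in for System.in):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        byte[] utf8Bytes = "humanité".getBytes(StandardCharsets.UTF_8);

        // Decodes with whatever the JVM default is (Cp1252 in a Windows shell)
        InputStreamReader byDefault =
                new InputStreamReader(new ByteArrayInputStream(utf8Bytes));
        System.out.println("default charset: " + Charset.defaultCharset());
        System.out.println("default reader encoding: " + byDefault.getEncoding());

        // Decodes with an explicitly requested charset, regardless of the OS
        InputStreamReader byUtf8 = new InputStreamReader(
                new ByteArrayInputStream(utf8Bytes), StandardCharsets.UTF_8);
        System.out.println("explicit reader encoding: " + byUtf8.getEncoding()); // prints UTF8
    }
}
```

Note that getEncoding reports the historical charset name, which is why "UTF8" appears without a hyphen, as in the question.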

Tess4J: "Invalid calling convention 63" despite correct versions

I am trying to do OCR and output the result as PDF using Tess4J and the following code on Linux (Ubuntu 16.04 Xenial).
public void testOcr() throws Exception {
    File imageFile = new File("/projects/de.conradt.core/tessdata/urkunde.jpg");
    ITesseract instance = new Tesseract1(); // tried both Tesseract() and Tesseract1()
    // File tessDataFolder = LoadLibs.extractTessResources("tessdata"); // Maven build bundles English data
    // instance.setDatapath(tessDataFolder.getParent());
    instance.setDatapath("/projects/de.conradt.core/tessdata");
    instance.setLanguage("deu");
    try {
        String result = instance.doOCR(imageFile);
        System.out.println(result);
    } catch (TesseractException e) {
        System.err.println(e.getMessage());
    }
    List<ITesseract.RenderedFormat> list = new ArrayList<ITesseract.RenderedFormat>();
    list.add(ITesseract.RenderedFormat.PDF);
    File pdfFile = new File("/projects/de.conradt.core/tessdata/urkunde.pdf");
    instance.createDocuments(pdfFile.getAbsolutePath(), "/projects/de.conradt.core/tessdata/urkunde", list);
}
The last line
instance.createDocuments(pdfFile.getAbsolutePath(), "/projects/de.conradt.core/tessdata/urkunde", list);
throws an Exception:
11:03:12.651 [http-nio-8080-exec-1] ERROR net.sourceforge.tess4j.Tesseract - Invalid calling convention 63
java.lang.IllegalArgumentException: Invalid calling convention 63
at com.sun.jna.Native.createNativeCallback(Native Method)
at com.sun.jna.CallbackReference.<init>(CallbackReference.java:239)
at com.sun.jna.CallbackReference.getFunctionPointer(CallbackReference.java:413)
at com.sun.jna.CallbackReference.getFunctionPointer(CallbackReference.java:395)
at com.sun.jna.Function.convertArgument(Function.java:541)
at com.sun.jna.Function.invoke(Function.java:305)
at com.sun.jna.Library$Handler.invoke(Library.java:236)
at com.sun.proxy.$Proxy89.gsapi_set_stdio(Unknown Source)
at org.ghost4j.Ghostscript.initialize(Ghostscript.java:323)
at net.sourceforge.tess4j.util.PdfUtilities.convertPdf2Png(PdfUtilities.java:103)
at net.sourceforge.tess4j.util.PdfUtilities.convertPdf2Tiff(PdfUtilities.java:48)
at net.sourceforge.tess4j.Tesseract.createDocuments(Tesseract.java:535)
at net.sourceforge.tess4j.Tesseract.createDocuments(Tesseract.java:507)
at de.conradt.core.Example.testOcr(Example.java:62)
at de.conradt.core.Example.ocr(Example.java:35)
I found this to be a known (but supposedly closed) issue with Tess4J:
https://github.com/nguyenq/tess4j/issues/35
https://sourceforge.net/p/tess4j/discussion/1202294/thread/2a25344c/
https://github.com/zippy1978/ghost4j/issues/44
but I checked my versions as well as the TESSDATA_PREFIX env variable. It's all set correctly as far as I can see.
Tesseract and Leptonica version:
$ /usr/bin/tesseract --version
tesseract 3.04.01
leptonica-1.73
libgif 5.1.2 : libjpeg 8d (libjpeg-turbo 1.4.2) : libpng 1.2.54 : libtiff 4.0.6 : zlib 1.2.8 : libwebp 0.4.4 : libopenjp2 2.1.0
Ghostscript version: (this is the latest version I get via apt-get)
$ ghostscript -v
GPL Ghostscript 9.18 (2015-10-05)
Copyright (C) 2015 Artifex Software, Inc. All rights reserved.
Tess4j version:
3.2.1
and the TESSDATA_PREFIX (the config files etc. are under /projects/de.conradt.core/tessdata):
$ echo $TESSDATA_PREFIX
/projects/de.conradt.core
Looking at the release log of Tess4J (http://tess4j.sourceforge.net/changelog.html), I should be using the correct version stack.
In particular, the changelog entry for version 3.2 says:
Version 3.2 - 15 May 2016: Revert JNA to 4.1.0 due to "Invalid calling
convention 63" errors invoking GhostScript via Ghost4J on Linux
so I thought I should be safe with 3.2.1.
Do I need to manually set anything about JNA? From my understanding, this had been fixed in 3.2.0 for Linux explicitly.
OK, I hadn't explicitly referenced JNA anywhere in my project POM; I thought this was all handled by Tess4J 3.2.1 and its pom.xml. I have now added JNA 4.1.0 as a dependency in my own pom.xml as well, and this seems to solve the problem.
<dependency>
    <groupId>net.java.dev.jna</groupId>
    <artifactId>jna</artifactId>
    <version>4.1.0</version>
</dependency>
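To see which JNA actually wins on the runtime classpath, its version constant can be read reflectively; this is a sketch that assumes JNA's public Native.VERSION field, and the reflective lookup lets the snippet compile and run even without JNA present:

```java
public class JnaVersionCheck {

    // Reads com.sun.jna.Native.VERSION reflectively so this compiles and
    // runs whether or not JNA is on the classpath.
    static String jnaVersion() {
        try {
            Class<?> nativeClass = Class.forName("com.sun.jna.Native");
            return String.valueOf(nativeClass.getField("VERSION").get(null));
        } catch (ClassNotFoundException e) {
            return "JNA is not on the classpath";
        } catch (ReflectiveOperationException e) {
            return "Could not read JNA version: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(jnaVersion());
    }
}
```

If this prints a version other than 4.1.0, some other dependency is pulling in a newer JNA that shadows the one Tess4J expects.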

Using j2pkcs11.dll with java 8 (64-bit) on windows 7 (64-bit)

I am trying to use the j2pkcs11.dll (packaged with JDK 1.8.0, 64-bit) to access certificates stored on a smartcard, but I am unable to make it work.
--- sample code to add the SunPKCS11 provider dynamically ---
String pkcs11ConfigSettings = "name = " + "TestSmartCard" + "\n" + "library = " + "C:/jdk1.8.0_11/jre/bin/j2pkcs11.dll";
byte[] pkcs11ConfigBytes = pkcs11ConfigSettings.getBytes();
ByteArrayInputStream confStream = new ByteArrayInputStream(pkcs11ConfigBytes);
Provider p = new sun.security.pkcs11.SunPKCS11(confStream);
---- the exception I get ---
java.security.ProviderException: Initialization failed
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:376)
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:103)
at scpoc.SmartCard.main(SmartCard.java:28)
Caused by: java.io.IOException: The specified procedure could not be found.
at sun.security.pkcs11.wrapper.PKCS11.connect(Native Method)
at sun.security.pkcs11.wrapper.PKCS11.<init>(PKCS11.java:138)
at sun.security.pkcs11.wrapper.PKCS11.getInstance(PKCS11.java:151)
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:313)
JEP 131 claims PKCS#11 support in Java 8 (http://openjdk.java.net/jeps/131), but I have not been able to get it to work on Windows 7 using 64-bit Java 8. Note: I also tried 32-bit Java 8 on Windows 7, but no luck either.
Has anyone had any success using the SunPKCS11 provider with java 8 (Windows 7)?
Up to JRE 7, the SunPKCS11 provider was present only in the 32-bit Windows version of the JRE. Since JRE 8 it is also present in the 64-bit Windows version. This is the information you see in JEP 131.
If you need to use the PKCS#11 API in a 64-bit Windows JRE older than JRE 8, you will have to use one of the alternative 3rd-party implementations, such as IAIK-JCE.
I have also noticed in your code sample that you are trying to use "j2pkcs11.dll" directly as a PKCS#11 library, which is wrong because it is just a JNI wrapper sitting between the JRE and the library implementing the PKCS#11 interface. Instead of loading "j2pkcs11.dll", you need to load the PKCS#11 library provided by your smartcard or HSM vendor.
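As a concrete illustration, the config would point at the vendor module instead. A sketch (the DLL path below is purely hypothetical; use the module shipped with your card's middleware):

```java
public class Pkcs11ConfigSketch {

    // Builds the SunPKCS11 config text; the library line must point at the
    // vendor's PKCS#11 module, never at j2pkcs11.dll.
    static String buildConfig(String vendorModulePath) {
        return "name = TestSmartCard\n"
                + "library = " + vendorModulePath + "\n";
    }

    public static void main(String[] args) {
        // Hypothetical placeholder path:
        String config = buildConfig("C:/Vendor/Middleware/vendor_pkcs11.dll");
        System.out.println(config);
        // With real middleware installed, the provider is then created as in
        // the question:
        //   java.security.Provider p = new sun.security.pkcs11.SunPKCS11(
        //           new java.io.ByteArrayInputStream(config.getBytes()));
        //   java.security.Security.addProvider(p);
    }
}
```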

JProfiler> ERROR unknown frame type in StackMapTable Exception and core dumped

When I start JProfiler v7.2.3 in instrumentation mode for my application, the JVM crashes with the hs_err_pid.log below, and at times core dumps too:
Internal Error (jvmtiRedefineClasses.cpp:2312), pid=19786, tid=52
Error: ShouldNotReachHere()
I also get the entry below in nohup.out:
JProfiler> ERROR unknown frame type in StackMapTable attributeAbort
Application details
Java version "1.6.0_23"
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
jboss-5.1.0
sun4v sparc SUNW,T5140 sunOS
Please suggest the possible root cause and fix. Thanks
Summing up the comments: In this case, another agent has produced an invalid StackMapTable attribute in the class file of an instrumented class and JProfiler did not know how to deal with that.