How to use the itext-asian library in Eclipse? - itext

My Code:
public static final String[] tempString = { "KozMinPro-Regular.otf", "UniJIS-UCS2-H", pharseString };
bf = BaseFont.createFont(tempString[0], tempString[1], BaseFont.NOT_EMBEDDED);
Result:
java.nio.charset.UnsupportedCharsetException: UniJIS-UCS2-H
at java.nio.charset.Charset.forName(Unknown Source)
at com.itextpdf.text.pdf.PdfEncodings.convertToBytes(PdfEncodings.java:186)
at com.itextpdf.text.pdf.TrueTypeFont.<init>(TrueTypeFont.java:376)
at com.itextpdf.text.pdf.BaseFont.createFont(BaseFont.java:705)
at com.itextpdf.text.pdf.BaseFont.createFont(BaseFont.java:621)
at com.itextpdf.text.pdf.BaseFont.createFont(BaseFont.java:456)
at de.vogella.itext.write.Main.addTextJapanese(Main.java:145)
at de.vogella.itext.write.Main.addContent(Main.java:134)
at de.vogella.itext.write.Main.main(Main.java:254)
My project:
Please see this link: http://upanh.in/Cmk
Do you have any suggestions on how to fix this bug?

You are blaming your tools for your own mistake.
This doesn't make sense:
public static final String[] tempString = { "KozMinPro-Regular.otf", "UniJIS-UCS2-H", pharseString };
bf = BaseFont.createFont(tempString[0], tempString[1], BaseFont.NOT_EMBEDDED);
Either you have a font program named KozMinPro-Regular.otf, or you want to use the font KozMinPro-Regular.
If you have a file named KozMinPro-Regular.otf, you don't need the iText-Asian.jar. Just use the font file with an encoding that is supported by that font program. UniJIS-UCS2-H is not supported by that OpenType font.
If you want to use CJK fonts (the fonts that are not embedded and require a font pack in Adobe Reader), you should use KozMinPro-Regular (without the otf).
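In code, the two alternatives look roughly like this (a minimal sketch for iText 5; the paths and embedding flags are the conventional choices, not taken from the original post):
// Alternative 1: CJK font from iText-Asian.jar; note there is no ".otf",
// and the font is not embedded (Adobe Reader resolves it via its font pack).
BaseFont cjk = BaseFont.createFont("KozMinPro-Regular", "UniJIS-UCS2-H", BaseFont.NOT_EMBEDDED);
// Alternative 2: the actual OpenType file with an encoding that font program
// supports, e.g. Identity-H; iText-Asian.jar is not needed here.
BaseFont otf = BaseFont.createFont("KozMinPro-Regular.otf", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);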
In short: you are confronted with a bug in your code, not with a bug in the tools you are using. You may want to phrase your questions differently in the future.

Related

Java iText - special characters not being displayed

I've been trying for quite some time now to generate a PDF in Java using itextpdf (com.itextpdf kernel, layout, form, pdfa) with text containing special characters (äöüß). I tried several things in different variations, like loading a TTF file and setting the encoding:
FontProgram fontProgram = FontProgramFactory.createFont("font/FreeSans.ttf");
PdfFont font = PdfFontFactory.createFont(fontProgram, "UTF-8");
document.setFont(font);
This way it just doesn't display special characters at all.
This doesn't work either:
var font = PdfFontFactory.createFont(StandardFonts.HELVETICA, PdfEncodings.UTF8);
document.setFont( font );
I haven't found a solution to this, and the official tutorials don't seem to cover it.
Other encodings just render placeholder characters.
This is how I add the text:
PdfWriter writer = new PdfWriter(filename);
PdfDocument pdf = new PdfDocument(writer);
Document document = new Document(pdf);
Paragraph p = new Paragraph("äüöß");
document.add(p);
document.close();
Edit: I just realized that it works when I load the text from elsewhere, like an input field, instead of passing a normal string. How can I make this work with hardcoded strings?
I tried re-encoding the string as described here: https://www.baeldung.com/java-string-encode-utf-8
but none of these methods work either. It always shows wrong characters.
PdfFont freeUnicode = PdfFontFactory.createFont("font/FreeSans.ttf", PdfEncodings.IDENTITY_H);
String rawString = "äöüß1234'";
byte[] bytes = StringUtils.getBytesUtf8(rawString);
String utf8EncodedString = StringUtils.newStringUtf8(bytes);
document.add(new Paragraph().setFont(freeUnicode)
        .add(utf8EncodedString));
Edit: The encoding in the source code editor is UTF-8, and I passed UTF-8 to the createFont() method, but that didn't work. When I pass CP1252 and change the source code encoding to ISO-8859-1, it shows the correct characters. It's strange how little information I could find about this problem.
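A plausible explanation for the edit above: string literals are decoded at compile time using the compiler's -encoding setting (or the platform default), so a UTF-8 source file compiled as CP1252/ISO-8859-1 yields mangled literals regardless of which font encoding is passed to createFont(). A minimal sketch, assuming iText 7 and a source file whose encoding matches the compiler setting:
// Compile with the source encoding declared, e.g.:
//   javac -encoding UTF-8 Main.java
// (Maven: <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>)
PdfFont font = PdfFontFactory.createFont("font/FreeSans.ttf", PdfEncodings.IDENTITY_H);
document.setFont(font);
document.add(new Paragraph("äöüß")); // renders correctly once source and compiler encodings agree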

PiranhaCMS Localization

I'm new to this CMS library. I'm using .NET 5 and working on localization. I wrote some code to change the culture:
CultureInfo ci = new(culture);
CultureInfo.DefaultThreadCurrentCulture = ci;
CultureInfo.DefaultThreadCurrentUICulture = ci;
It worked when I used my resource files "resources.en-US.resx" and "resources.vi-VN.resx" (I added "en-US" and "vi-VN" as supported cultures), but when I changed the resource files in Piranha.Manager.Localization from "General.vi.resx" to "General.vi-VN.resx" (to match the culture format), it no longer worked.
And if I set the supported cultures to "en" and "vi" (to match the resource files in Piranha.Manager.Localization), it still doesn't work.
services.Configure<RequestLocalizationOptions>(
    opt =>
    {
        var supportedCultures = new List<CultureInfo>
        {
            new CultureInfo("en"),
            new CultureInfo("vi")
        };
        opt.DefaultRequestCulture = new RequestCulture("en");
        opt.SupportedCultures = supportedCultures;
        opt.SupportedUICultures = supportedCultures;
    });
I don't know whether the way I'm setting up localization for Piranha is right. I can't find any tutorial for it.
Can you help me with localization for PiranhaCMS?
I really appreciate it.
Best regards

How to make my own encoding for a file in the VSCode editor

Is it possible to have a custom encoding in the VSCode editor, inherited from an existing one?
class myEncoding implements utf-8
{
    // changes for some codes
}
I have some files which contain German characters like "ä ö ü", encoded as Unicode escape sequences.
So, for example, the file contains the following line:
Pr\u00FCfsignal
While I want to edit this file with the correct German characters, it should exist on the hard disk in the form above.
This is how I want to see it in the editor:
Prüfsignal
I already have a function that can transform a string in both directions:
function translate(content: string, direction: boolean): string {
    if (direction) {
        content = content
            .replace(/\\u00E4/g, "ä")
            .replace(/\\u00F6/g, "ö")
            .replace(/\\u00FC/g, "ü")
            .replace(/\\u00C4/g, "Ä")
            .replace(/\\u00D6/g, "Ö")
            .replace(/\\u00DC/g, "Ü")
            .replace(/\\u00DF/g, "ß")
            .replace(/\\u00B0/g, "°")
            .replace(/\\u00B1/g, "±")
            .replace(/\\u00B5/g, "µ");
    } else {
        content = content
            .replace(/ä/g, "\\u00E4")
            .replace(/ö/g, "\\u00F6")
            .replace(/ü/g, "\\u00FC")
            .replace(/Ä/g, "\\u00C4")
            .replace(/Ö/g, "\\u00D6")
            .replace(/Ü/g, "\\u00DC")
            .replace(/ß/g, "\\u00DF")
            .replace(/°/g, "\\u00B0")
            .replace(/±/g, "\\u00B1")
            .replace(/µ/g, "\\u00B5");
    }
    return content;
}
Can this be solved with a custom encoding, and if yes, any hints?
Is there possibly a better solution?
There has been an open feature request for some years, "Provide encoding-related APIs for editor extensions":
https://github.com/microsoft/vscode/issues/824
For now you could just wrap that function in a loop that encodes all files in the working directory.
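The suggested workaround is editor-independent, so the batch pass can live outside the extension. A minimal sketch, written in Java purely for illustration (the .sig file filter and the UTF-8 charset are assumptions), generalizing the replace() chains above to any \uXXXX escape:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DecodeEscapes {
    // Matches literal \uXXXX sequences in the file content.
    private static final Pattern ESCAPE = Pattern.compile("\\\\u([0-9A-Fa-f]{4})");

    public static void main(String[] args) throws IOException {
        try (var files = Files.walk(Path.of("."))) {             // working directory
            files.filter(f -> f.toString().endsWith(".sig"))     // hypothetical extension
                 .forEach(DecodeEscapes::decodeInPlace);
        }
    }

    static void decodeInPlace(Path p) {
        try {
            String content = Files.readString(p, StandardCharsets.UTF_8);
            // Replace each escape with the character it encodes.
            String decoded = ESCAPE.matcher(content).replaceAll(
                    m -> Matcher.quoteReplacement(
                            String.valueOf((char) Integer.parseInt(m.group(1), 16))));
            Files.writeString(p, decoded, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}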

How to construct a Unicode glyph key in C#

I am using FontAwesome to display glyphs in my Xamarin Android application. If I hardcode the glyph like this, everything works fine:
string iconKey = "\uf0a3";
var drawable = new IconDrawable(this.Context, iconKey, "Font Awesome 5 Pro-Regular-400.otf").Color(Xamarin.Forms.Color.White.ToAndroid()).SizeDp(fontSize);
However, if what I have is the four-character code "f0a3" from FontAwesome's cheatsheet, stored in a string variable, I don't know how to set my iconKey variable to a value that works. Just concatenating a "\u" onto the beginning doesn't work, which makes sense, since that's a Unicode escape indicator rather than part of a standard string, but I don't know what to do instead. I also tried converting to and from Unicode in various random ways, e.g.
iconKey = unicode.GetChars(unicode.GetBytes("/u" + myFourChar.ToString())).ToString();
but unsurprisingly that didn't work either.
The IconDrawable is from here. The value I send becomes an input there to the Paint.GetTextBounds method and the Canvas.DrawText method.
Thanks for any assistance!
Found the answer here. Here is the code I am using, based on that post but simplified, since I have only one hexadecimal code to handle:
string myString = "f0a3";
var chars = new char[] { (char)Convert.ToInt32(myString, 16) };
string iconKey = new string(chars);
var drawable = new IconDrawable(this.Context, iconKey, "Font Awesome 5 Pro-Regular-400.otf").Color(Xamarin.Forms.Color.White.ToAndroid()).SizeDp(fontSize);

Arabic problems when converting HTML to PDF using ITextRenderer

I am using ITextRenderer to convert HTML to PDF. This is my code:
ByteArrayOutputStream out = new ByteArrayOutputStream();
ITextRenderer renderer = new ITextRenderer();
String inputFile = "C://Users//Administrator//Desktop//aaa2.html";
String url = new File(inputFile).toURI().toURL().toString();
renderer.setDocument(url);
renderer.getSharedContext().setReplacedElementFactory(
        new B64ImgReplacedElementFactory());
// Work around the Arabic rendering problem
ITextFontResolver fontResolver = renderer.getFontResolver();
try {
    fontResolver.addFont("C://Users//Administrator//Desktop//arialuni.ttf", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
} catch (DocumentException e) {
    e.printStackTrace();
}
renderer.layout();
OutputStream outputStream = new FileOutputStream("C://Users//Administrator//Desktop//HTMLasPDF.pdf");
renderer.createPDF(outputStream, true);
/*
PdfWriter writer = renderer.getWriter();
writer.open();
writer.setRunDirection(PdfWriter.RUN_DIRECTION_RTL);
OutputStream outputStream2 = new FileOutputStream("C://Users//Administrator//Desktop//HTMLasPDFcopy.txt");
renderer.createPDF(outputStream2);
*/
renderer.finishPDF();
out.flush();
out.close();
Actual PDF Result:
Expected PDF Result:
How can I make Arabic ligatures render correctly?
If you want to do this properly (I assume using iText, since your post is tagged as such), you should use:
iText7
pdfHTML (to convert HTML to PDF)
pdfCalligraph (to handle Arabic ligatures properly)
a font that supports these features (as indicated by another answer)
For an example, please consult the HTML to PDF tutorial, more specifically the following FAQ item: How to convert HTML containing Arabic/Hebrew characters to PDF?
You need fonts that contain the glyphs you need, e.g.:
public static final String[] FONTS = {
    "src/main/resources/fonts/noto/NotoSans-Regular.ttf",
    "src/main/resources/fonts/noto/NotoNaskhArabic-Regular.ttf",
    "src/main/resources/fonts/noto/NotoSansHebrew-Regular.ttf"
};
And you need a FontProvider that knows how to find these fonts in the ConverterProperties:
public void createPdf(String src, String[] fonts, String dest) throws IOException {
    ConverterProperties properties = new ConverterProperties();
    FontProvider fontProvider = new DefaultFontProvider(false, false, false);
    for (String font : fonts) {
        FontProgram fontProgram = FontProgramFactory.createFont(font);
        fontProvider.addFont(fontProgram);
    }
    properties.setFontProvider(fontProvider);
    HtmlConverter.convertToPdf(new File(src), new File(dest), properties);
}
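For completeness, a hypothetical invocation of the method above (the paths are assumptions, not part of the original answer):
createPdf("src/main/resources/html/arabic.html", FONTS, "target/arabic.pdf");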
Note that the text will come out all wrong if you don't have the pdfCalligraph add-on. That add-on didn't exist at the time Flying Saucer was created, hence you can't use Flying Saucer for converting documents with text in Arabic, Hindi, Telugu,... Read the pdfCalligraph white paper if you want to know more about ligatures.
Greek characters seemed to be omitted; they didn’t show up in the document.
"In Flying Saucer the generated PDF uses some kind of default (probably Helvetica) font that contains a very limited character set, which obviously does not contain the Greek code page." (link)
I changed my approach and now convert the PDF using wkhtmltopdf instead.