Why does not browser support unicode characters typed in netbeans? - netbeans

I am using NetBeans 7.4 for PHP programming. I have a web form and need to insert a non-English language (Sinhalese) into the interface. I have installed several fonts for this language on my PC, and my browser (Firefox) renders them properly, because I have viewed local websites in this language using the browser.
NetBeans shows this text as squares, and when I run it in the browser something like කොහොමà·à¶ºà·’ is displayed (not squares). What is the reason for this? I don't really need NetBeans to show those characters; if the browser can render them, that would be enough.

Answering my own question :)
If you want to display Unicode in your browser, you have to include the meta tag below inside the <head> tag of your HTML. Otherwise it won't render non-English content. This worked for me, but NetBeans still shows squares for non-English content. I don't mind, since I am using non-English text only in user interfaces.
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
Hope this will help a future reader
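What the asker saw is classic mojibake: the PHP page sends UTF-8 bytes, but without a charset declaration the browser falls back to a Latin-1-style decoding. A minimal Python sketch of that mechanism (the Sinhala word is illustrative; any non-ASCII text behaves the same way):

```python
text = "කොහොමද"  # Sinhala sample text, sent by the server as UTF-8
utf8_bytes = text.encode("utf-8")

# Without a charset hint, a browser may decode those bytes as Latin-1:
garbled = utf8_bytes.decode("latin-1")
print(garbled != text)  # True: accented Latin junk instead of Sinhala

# Declaring UTF-8 makes the browser decode the same bytes correctly:
restored = garbled.encode("latin-1").decode("utf-8")
print(restored == text)  # True
```

The bytes were never wrong; only the browser's guess at their encoding was, which is why the meta tag alone fixes the page.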

Related

Badly displayed accented characters

On a TYPO3 CMS website I created 15-20 years ago and have been updating regularly, I noticed a new problem a few days ago: the accented characters were all displayed badly. I hadn't changed or updated anything for a few weeks. How do I get it back to how it was?
That sounds like a bad UTF-8 encoding of your database.
Normally this should have been detected and cleaned up some years ago, when TYPO3 switched over to UTF-8 connections in general. Up to then it was possible to use the database in any encoding and force UTF-8 usage in the connection; in this way UTF-8 characters were stored in, e.g., ISO-Latin fields. One missing forced usage and you ended up with a scrambled page. Especially if you transferred the data with a dump, you could destroy your data.
As this should have happened years ago, another problem could have occurred:
Maybe your (updated?) browser can't decide which encoding your site is using and guesses wrong.
Maybe you need to provide a proper encoding information in your HTML output.
ADD:
A proper encoding declaration may look like <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> or just <meta charset="utf-8"> as meta information in the HTML header.
Normally such a header is sent by TYPO3 by default.
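The "scrambled dump" scenario described above is double encoding: UTF-8 bytes get treated as Latin-1 text and re-encoded to UTF-8 a second time. A small Python sketch of the symptom and the repair (assuming Latin-1 was the intermediate encoding, which is the common case but not the only possibility):

```python
original = "café"

# UTF-8 bytes misread as Latin-1, then encoded to UTF-8 again (double encoding):
double_encoded = original.encode("utf-8").decode("latin-1").encode("utf-8")
print(double_encoded.decode("utf-8"))  # cafÃ© -- the classic symptom

# Repair: reverse the extra round trip.
repaired = double_encoded.decode("utf-8").encode("latin-1").decode("utf-8")
print(repaired == original)  # True
```

If your database shows "Ã©" where "é" belongs, this round trip (done carefully, on a backup first) is usually the fix.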

Phantomjs text symbols not displayed with html to pdf

Hi, I'm running PhantomJS on CentOS 6, and special text symbols are not displayed in the PDF output, such as ⊥ (up tack, U+22A5) and ∩ (intersection, U+2229). PhantomJS on my old server worked fine. Do I need to install special fonts on the new server?
I found my answer by adding this:
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
My reference:
https://jsreport.net/blog/national-characters-in-phantom-pdf-recipe
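Those two symbols sit well outside Latin-1, so without a UTF-8 declaration their bytes cannot be interpreted at all, which is consistent with the meta-tag fix above. A quick Python check of that claim (note the question's other hypothesis also matters: even with the right charset, the renderer still needs a font that contains these glyphs):

```python
symbols = "\u22a5 \u2229"  # "⊥ ∩": up tack and intersection

# Neither code point exists in Latin-1:
try:
    symbols.encode("latin-1")
    print("representable in Latin-1")
except UnicodeEncodeError:
    print("not representable in Latin-1; an explicit UTF-8 declaration is required")

# In UTF-8 each symbol takes a 3-byte sequence (plus the space):
print(len(symbols.encode("utf-8")))  # 7
```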

Name of uploaded files in unicode

I'm having problems getting the correct names of files uploaded to a NancyFx web application.
I'm Spanish, and we have characters that are uncommon elsewhere, like
ñ á é í ó ú... their uppercase forms, and many more.
When I pick up the already-uploaded file's name from this.Request.Files.FirstOrDefault().Name, the names are always badly encoded.
I tried a lot of transformations with no success.
Any suggestions are highly appreciated.
Does your HTML page contain a
<META http-equiv="Content-Type" content="text/html; charset=utf-8">
within the <HEAD> element?
I had the same experience with Korean file names.
After some more googling, I found this NancyFx GitHub issue: https://github.com/NancyFx/Nancy/issues/1850
It's a fixed bug (but I am using a Nancy 0.x version, so the fix did not help me).
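The usual failure mode behind badly encoded upload names is the same one as in the other threads: UTF-8 filename bytes decoded as Latin-1 somewhere in the pipeline. A hedged Python sketch of the workaround (the helper name is mine; the equivalent re-decode works in C# with Encoding.GetBytes/GetString, assuming Latin-1 really was the wrong intermediate encoding):

```python
def repair_filename(name: str) -> str:
    """Undo a UTF-8 filename that was misread as Latin-1 along the way."""
    try:
        return name.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return name  # not this failure mode; leave the name alone

print(repair_filename("niÃ±o.txt"))   # niño.txt
print(repair_filename("plain.txt"))  # plain.txt (ASCII passes through unchanged)
```

Apply it only when you have confirmed the mangling pattern; blindly re-decoding correct names would corrupt them, hence the try/except passthrough.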

why "»" shows as a question mark("?") in my page?

Is there any restrictions for it to show normally?
Sounds like an encoding problem. For special characters like that, I prefer to use HTML entities. In this case, try &raquo;
In my experience, a question mark usually replaces undecodable special characters when you encode your special characters as UTF-8, because web browsers by default decode the web page using ISO-Latin-1. You can/should explicitly declare the encoding of your web page using the following directive:
<?xml version="1.0" encoding="UTF-8" ?>
for xhtml, or
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
(inside the <head> element), for HTML.
Regard this post as a supplement, because I guess that using the XML/HTML entities like &raquo; or &#187; mentioned above is the better way to go.
You can also use &#187;
If your Apache server is configured with...
AddDefaultCharset UTF-8
...in the httpd.conf file (which, strangely, was the default on my server), then Content-Type specs in the .html files (e.g., <meta http-equiv=Content-Type content="text/html; charset=windows-1252">) will be ignored, causing character codes above 127 to be interpreted incorrectly.
Comment out the AddDefaultCharset line and restart Apache.
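The entity-based answers above sidestep the charset problem entirely, because entities are plain ASCII and survive any encoding mishap. A short Python demonstration of the three behaviors discussed in this thread (entities, decoding, and the lossy "?" substitution):

```python
import html

s = "\u00bb"  # "»", right-pointing double angle quotation mark

# A numeric character reference is pure ASCII and renders in any charset:
print(s.encode("ascii", "xmlcharrefreplace"))  # b'&#187;'

# The named entity maps back to the same character:
print(html.unescape("&raquo;"))  # »

# The lossy fallback is where the question mark comes from:
print(s.encode("ascii", "replace"))  # b'?'
```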

iPhone's Mobile Safari: Special Characters

The iPhone app I'm working on uses HTML help files, and special characters such as
ü and ê
are being mangled by iPhone's Mobile Safari. Is there anything I can do to correct this?
If you're using XHTML, ensure that the content of your files really is the encoding specified in the doctype. If you're using just plain HTML, consider using XHTML instead, or
Use HTML entities (e.g. &eacute;)
Use the META tag to specify an encoding
Have you tried using numeric character references? Alternatively, perhaps you can use a <meta http-equiv="content-type" ...> element. Also, maybe there's a better way to tell Mobile Safari the character encoding of HTML files (equivalent to the server's HTTP Content-Type header).