Store UTF-8 characters into MySQL using PHP (mysqli)

I am facing a small issue with UTF-8 characters. I know a lot of answers exist, but my problem is not solved.
I have a string that displays fine when I echo it in the browser, but when I store it in the database it shows up like this:
Pø¦.OÝ.Xر1⸮24ø¿ø… (browser) > Pø¦.OÃ.Xر1⸮24ø¿ø… (database)
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$db = new mysqli('localhost', 'xxxx', 'xxxx', 'xxxx');
if ($db->connect_errno > 0) {
    die('Unable to connect to database [' . $db->connect_error . ']');
}
$db->set_charset("utf8");
How can I save UTF-8 characters to the database? I have already tried:
<meta charset="utf-8">
CHARACTER SET utf8 COLLATE utf8_general_ci
header('Content-Type: text/html; charset=utf-8');
I don't understand what the problem is. Is there a font issue?

Your problem is your database. Change your database's character set and collation to utf8 / utf8_general_ci like this:
ALTER DATABASE <database_name> CHARACTER SET utf8 COLLATE utf8_general_ci;
It may also be worth reading PHP, MySQL, characters not being displayed correctly.
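For reference, here is a minimal end-to-end sketch of the same idea; this is not the asker's actual code, and the credentials, table and column names are placeholders:
// A minimal sketch; credentials, table and column names are placeholders.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

$db = new mysqli('localhost', 'user', 'password', 'your_database');
$db->set_charset('utf8'); // the connection charset must match the data you send

// Make sure the table itself stores UTF-8 (run once).
$db->query('ALTER TABLE your_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci');

// Insert through a prepared statement; the string stays UTF-8 end to end.
$stmt = $db->prepare('INSERT INTO your_table (your_column) VALUES (?)');
$text = 'žluťoučký kůň'; // any UTF-8 string
$stmt->bind_param('s', $text);
$stmt->execute();
With the connection, the table and the page (header/meta) all set to UTF-8, the string should come back unchanged.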

Related

Some OpenCart modules don't support UTF-8 and show ???? instead of characters

In some cases, modules for OpenCart don't support RTL languages and UTF-8 characters, and they show ????????? instead of your Persian/Arabic characters. What should I do with these modules to display my characters correctly?
There are several ways:
1) Use SQL queries:
In this case you can run queries like the ones below:
$this->db->query("SET NAMES 'utf8'");
$this->db->query("SET CHARACTER SET utf8;");
$this->db->query("SET character_set_connection=utf8;");
You should put these queries in your database driver file. Here I am using mysqli, so I put the code in mysqli.php under opencart\system\library\db\mysqli.php, like below:
public function __construct($hostname, $username, $password, $database, $port = '3306') {
    $this->link = new \mysqli($hostname, $username, $password, $database, $port);

    if ($this->link->connect_error) {
        trigger_error('Error: Could not make a database link (' . $this->link->connect_errno . ') ' . $this->link->connect_error);
        exit();
    }

    // Force the connection to UTF-8 so data is sent and received unchanged.
    $this->link->query("SET NAMES 'utf8'");
    $this->link->query("SET CHARACTER SET utf8");
    $this->link->query("SET character_set_connection=utf8;");
    $this->link->query("SET SQL_MODE = ''");
}
2) Change the database charset:
In some cases the queries above won't solve your problem. Then you should check the collation of your database, of every table, and of every column inside those tables, and set it to utf8_general_ci.
To do this you can use ALTER TABLE YOUR_TABLE_NAME DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci; to change a table's default character set, and ALTER TABLE YOUR_TABLE_NAME MODIFY COLUMN_NAME VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci; (repeating the column's type) to change a column's character set. A single ALTER TABLE YOUR_TABLE_NAME CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci; converts the table default and all of its text columns at once; a sketch that automates this from PHP follows below.
Please note: if there are many tables and columns, you can export the database to a .sql file, open it with Notepad, replace every latin1 (that was my file's charset; it may differ in yours) with utf8, save it, and import this new database file.
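As promised above, here is a minimal sketch (not OpenCart code; the credentials and database name are placeholders) that converts every table in a database in one go:
// A minimal sketch; credentials and the database name are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'your_database');
$db->set_charset('utf8');

$tables = $db->query(
    "SELECT TABLE_NAME FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = 'your_database' AND TABLE_TYPE = 'BASE TABLE'"
);

while ($row = $tables->fetch_assoc()) {
    // CONVERT TO changes the table default and every text column in one statement.
    $db->query('ALTER TABLE `' . $row['TABLE_NAME'] . '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci');
}
Back up the database before running anything like this.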
3) Change the file encoding:
In this case, open your file with Notepad, use File / Save As, and in the Save As window change the encoding to UTF-8 (this mostly helps when the file uses echo or print to output strings).
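If you want to check from PHP whether a source file was actually saved as UTF-8, a minimal sketch (assuming the mbstring extension; the path is a placeholder) is:
// A minimal sketch, assuming the mbstring extension; the path is a placeholder.
$path = 'catalog/language/your_language/your_file.php';
$bytes = file_get_contents($path);

if (!mb_check_encoding($bytes, 'UTF-8')) {
    echo $path . ' is not valid UTF-8 and should be re-saved with UTF-8 encoding.' . PHP_EOL;
}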
Hope it helps.

How to get a "€" (u+20AC) character in a postgres UTF8 client encoding?

I only found some strange results online where somebody tried select E'\x020AC', select E'\x020\x0AC' or select E'\x0AC\x020', but none of them worked.
So I had to search and read the pg docs more carefully, and found the solution:
select U&'\20AC' -- => "€"
select E'\u20AC' -- => "€"
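From PHP, a minimal sketch of the same query (assuming the pdo_pgsql extension; the DSN and credentials are placeholders) would be:
// A minimal sketch, assuming pdo_pgsql; the DSN and credentials are placeholders.
$pdo = new PDO('pgsql:host=localhost;dbname=your_database', 'user', 'password');

// Make sure this session really talks UTF-8 to the server.
$pdo->exec("SET client_encoding TO 'UTF8'");

// U&'\20AC' is the Unicode-escape form of the euro sign
// (requires standard_conforming_strings = on, the default in current versions).
$euro = $pdo->query("SELECT U&'\\20AC' AS euro")->fetchColumn();
echo $euro; // €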

PostgreSQL 9.0 replace function not working for one character

I'm working with PostgreSQL 9.0 and I have a table in which I need to replace a character with '' (blank).
For that I'm using:
update species set engname = replace(engname, '', '');
(the query was posted as an image in the original question)
Here species is the table and engname is the field (character varying).
The content of one of the rows is:
" -tellifer fÂÂrthii"
Even after running the query the character is not replaced.
I have also tried:
update species set sciname = regexp_replace(sciname, '', '')
but the character does not get replaced.
My database is:
CREATE DATABASE myDB
WITH OWNER = Myadmin
ENCODING = 'SQL_ASCII'
TABLESPACE = pg_default
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1;
We are planning to move to UTF-8 encoding, but the conversion with iconv fails because of this character, so I wanted to replace it.
Can anyone tell me how to remove that character?
This symbol (the � replacement character) can stand for many different characters, so you cannot use replace. Probably your client application uses a different encoding than the database; the symbol is shown to signal broken encoding.
The solution is to use the correct client encoding:
postgres=# select * from ff;
a
───────────────
žluťoučký kůň
(1 row)
postgres=# set client_encoding to 'latin2'; --setting wrong encoding
SET
postgres=# select * from ff; -- and you can see strange symbols
a
───────────────
�lu�ou�k� k�
(1 row)
postgres=# set client_encoding to 'utf8'; -- setting good encoding
SET
postgres=# select * from ff;
a
───────────────
žluťoučký kůň
(1 row)
Another solution is replacing national or special characters with related ASCII characters: 9.x has the unaccent contrib module for UTF-8, and for some 8-bit encodings there is the function to_ascii().
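If the client application is a PHP script, a minimal sketch of setting the client encoding (assuming the pgsql extension; connection parameters are placeholders) looks like this. Note that it only helps once the database itself uses a real encoding such as UTF8 rather than SQL_ASCII, which performs no conversion at all:
// A minimal sketch, assuming the pgsql extension; connection parameters are placeholders.
$conn = pg_connect('host=localhost dbname=your_database user=user password=secret');

// Tell the server which encoding this client expects, so text is converted correctly.
pg_set_client_encoding($conn, 'UTF8');

$result = pg_query($conn, 'SELECT engname FROM species');
while ($row = pg_fetch_assoc($result)) {
    echo $row['engname'], PHP_EOL;
}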

Zend Framework and UTF-8 characters (æøå)

I use Zend Framework and I have a problem with JSON and UTF-8.
Output
\u00c3\u00ad\u00c4\u008d
íÄ
I use...
JavaScript (jQuery)
contentType : "application/json; charset=utf-8",
dataType : "json"
Zend Framework
$view->setEncoding('UTF-8');
$view->headMeta()->appendHttpEquiv('Content-Type', 'text/html;charset=utf-8');
header('Content-Type: application/json; charset=utf-8');
utf8_encode();
Zend_Json::encode
Database
resources.db.params.charset = "utf8"
resources.db.params.driver_options.1002 = "SET NAMES utf8"
resources.db.isDefaultTableAdapter = true
Collation
utf8_unicode_ci
Type
MyISAM
Server
PHP Version 5.2.6
What did I do wrong? Thank you for your reply!
utf8_encode();
If you've got UTF-8 strings from your database and UTF-8 strings from your browser, then you don't need to utf8_encode any more. You've already got UTF-8 strings; calling this function again will just give you the UTF-8 representation of what you'd get if you read UTF-8 bytes as ISO-8859-1 by mistake.
Pass your untouched UTF-8 strings straight to the JSON encoder.
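As an illustration, here is a minimal sketch of the double-encoding effect; the sample string "íč" is an assumption, chosen because it reproduces the output shown in the question:
// A minimal sketch of the double-encoding problem; the sample string is an assumption.
// (Save this file itself as UTF-8.)
$s = "íč"; // already UTF-8

echo json_encode($s), PHP_EOL;              // "\u00ed\u010d"             -- correct
echo json_encode(utf8_encode($s)), PHP_EOL; // "\u00c3\u00ad\u00c4\u008d" -- double-encoded, as in the question
utf8_encode() treats its input as ISO-8859-1, so applying it to bytes that are already UTF-8 encodes them a second time.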
I think this question is somehow related to yours. My problem was when encoding some text [Arabic, Hebrew or Chinese, as you might see]; it turns out that the \uXXXX notation is Unicode escaping understood by JavaScript/ECMAScript, which is what you are seeing. I hope that explains it in detail.

How do I get DBIx::Class to collate with UTF-8?

I am trying to implement web search functionality on a MySQL database using DBIx::Class::ResultSet. I have it working apart from one problem: when searching for 'ü', the search is performed by MySQL as 'u' (in other words, without the umlaut). The same happens for the other 'extended ASCII' characters. The table and the connection are in UTF-8.
I have done some testing on the database and found the solution: add COLLATE utf8_bin to the WHERE clause, as in:
SELECT name FROM my_table WHERE name LIKE '%ü%' COLLATE utf8_bin;
But how do I implement this with DBIx::Class? My search does a 'WHERE ... LIKE' on two tables in the one query.
Thanks in advance,
Mauritz
$rs->search({
    name => \'LIKE ? COLLATE utf8_bin'
}, {
    bind => [ '%' . $search_key . '%' ]
});
perhaps?