Some OpenCart modules don't support UTF-8 and show ???? instead of characters - mysqli

In some cases, modules for OpenCart don't support RTL languages and UTF-8 characters, and they will show ????????? instead of your Persian/Arabic characters. What should I do with these modules to make my characters show up correctly?

There are several ways:
1) Use SQL queries:
In this case you can use some queries like the ones below:
$this->db->query("SET NAMES 'utf8'");
$this->db->query("SET CHARACTER SET utf8;");
$this->db->query("SET character_set_connection=utf8;");
You should put these queries in your database driver file. Here I am using mysqli, so I put the code in mysqli.php under opencart\system\library\db\mysqli.php, like below:
public function __construct($hostname, $username, $password, $database, $port = '3306') {
    $this->link = new \mysqli($hostname, $username, $password, $database, $port);

    if ($this->link->connect_error) {
        trigger_error('Error: Could not make a database link (' . $this->link->connect_errno . ') ' . $this->link->connect_error);
        exit();
    }

    // Force the client, connection and result character sets to UTF-8
    $this->link->query("SET NAMES 'utf8'");
    $this->link->query("SET CHARACTER SET utf8");
    $this->link->query("SET character_set_connection=utf8;");
    $this->link->query("SET SQL_MODE = ''");
}
2) Change the database charset:
In some cases the queries above won't solve your problem. Then you should check the collation of all tables, and of the columns inside those tables, and set it to utf8_general_ci.
To do this you can use ALTER TABLE YOUR_TABLE_NAME DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci; to change the default character set of a table, and ALTER TABLE YOUR_TABLE_NAME MODIFY COLUMN_NAME VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci; (repeating the column's actual type) to change the character set of a single column, as in the sketch at the end of this section.
Please note, if there are very many tables and columns, you can instead export your database to a .sql file, open it with Notepad, replace every latin1 (that is my file's charset; it may be different in yours) with utf8, save it, and import this new database file.
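A convenient shortcut for the per-table statements above is MySQL's CONVERT TO form, which converts a table and all of its character columns in one statement. A minimal sketch, assuming a table named oc_product (the name is only an example; run it once per affected table):

ALTER TABLE oc_product
  CONVERT TO CHARACTER SET utf8
  COLLATE utf8_general_ci;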
3) Change the file encoding:
In this case, open your file with Notepad and use File / Save As, and in the Save As window change the encoding to UTF-8 (this mostly helps if the file uses echo or print to output some strings).
Hope it helps.

Related

How to import CSV with characters in a local regional language into PostgreSQL

I want to upload a CSV file that has regional-language characters in some of its values, e.g. the format is like:
FirstName,LastName,DOB,State
Rahul,Gour,25-Mar-1988,Delhi
രാഹുൽ,ഗൗർ,24-മാർ-1987,Kerala
In the above format some lines are in a local language (Malayalam). When I upload this file, the data with these special characters shows up as "????????????".
Is there any format I can use to upload this data as it is, or can this not be done in PostgreSQL?
Please help.
You have to figure out what kind of encoding the CSV file uses. Most probably it is using the UTF-8 encoding. Afterwards you can just use:
copy tablename (firstname, lastname, dob, state)
from '/path/to/the/file.csv'
with (encoding 'UTF-8', format csv);
If the server doesn't have access to the file, you can use the equivalent \copy command in psql.
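For reference, a minimal sketch of that client-side variant, using the same assumed table and path as above (\copy must be written on a single line):

\copy tablename (firstname, lastname, dob, state) from '/path/to/the/file.csv' with (encoding 'UTF-8', format csv)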

How to convert hex characters when using Postgres COPY FROM?

I am importing data from a file into a PostgreSQL database table using COPY FROM.
Some of the strings in my file contain hex characters (mostly \x0d and \x0a) and I'd like them to be converted into regular text using COPY.
My problem is that they are treated as regular text and remain in the string unchanged.
How can I get the hex values converted?
Here is a simplified example of my situation:
-- The table I am importing to
CREATE TABLE my_pg_table (
id serial NOT NULL,
data text
);
COPY my_pg_table(id, data)
FROM 'location/data.file'
WITH CSV
DELIMITER ' ' -- this is actually a tab
QUOTE ''''
ENCODING 'UTF-8'
Example file:
1 'some data'
2 'some more data \x0d'
3 'even more data \x0d\x0a'
Note: the file is tab delimited.
Now, doing:
SELECT * FROM my_pg_table
would get me results containing hex.
Additional info for context:
My task is to export data from Sybase tables (many hundreds of them) and import them into Postgres. I am using UNLOAD to export data to files like so:
UNLOAD
TABLE my_sybase_table
TO 'location/data.file'
DELIMITED BY ' ' -- this is actually a tab
BYTE ORDER MARK OFF
ENCODING 'UTF-8'
It seems to me that (for a reason I don't understand) hex escapes are only converted when using FORMAT TEXT, while FORMAT CSV treats them as a regular string.
Solving the problem in my situation:
Because I had to use TEXT, I no longer had the QUOTE option, and because of that I couldn't have quoted strings in my files anymore. So I needed my files in a slightly different format, and eventually used this to export my tables from Sybase:
UNLOAD
SELECT
COALESCE(cast(id as long varchar), '(NULL)'),
COALESCE(cast(data as long varchar), '(NULL)')
FROM my_sybase_table
TO 'location/data.file'
DELIMITED BY ' ' -- still tab delimited
BYTE ORDER MARK OFF
QUOTES OFF
ENCODING 'UTF-8'
and to import it to postgres:
COPY my_pg_table(id, data)
FROM 'location/data.file'
DELIMITER ' ' -- tab delimited
NULL '(NULL)'
ENCODING 'UTF-8'
I used (NULL) because I needed a way to differentiate between an empty string and null. I cast every column to long varchar to make my mass export/import more convenient.
I'd still be very interested to know why the hex escapes aren't converted when using FORMAT CSV.
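The short answer, as far as I understand it, is that the two formats simply define backslash differently: in the default text format COPY interprets escapes such as \x0d and \x0a as single bytes, while in CSV format a backslash is an ordinary character. A minimal sketch that shows the difference (the scratch table and file paths are hypothetical):

-- scratch table for the demonstration
CREATE TABLE copy_demo (v text);

-- text format: backslash escapes are interpreted, so a literal "\x0d"
-- in the file becomes an actual carriage-return byte in the column
COPY copy_demo FROM '/tmp/demo.txt';

-- CSV format: backslash has no special meaning, so the four characters
-- "\x0d" are stored unchanged
COPY copy_demo FROM '/tmp/demo.csv' WITH (FORMAT CSV);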

PG COPY error: invalid input syntax for integer

Running COPY results in ERROR: invalid input syntax for integer: "" error message for me. What am I missing?
My /tmp/people.csv file:
"age","first_name","last_name"
"23","Ivan","Poupkine"
"","Eugene","Pirogov"
My /tmp/sql_test.sql file:
CREATE TABLE people (
age integer,
first_name varchar(20),
last_name varchar(20)
);
COPY people
FROM '/tmp/people.csv'
WITH (
FORMAT CSV,
HEADER true,
NULL ''
);
DROP TABLE people;
Output:
$ psql postgres -f /tmp/sql_test.sql
CREATE TABLE
psql:sql_test.sql:13: ERROR: invalid input syntax for integer: ""
CONTEXT: COPY people, line 3, column age: ""
DROP TABLE
Trivia:
PostgreSQL 9.2.4
ERROR: invalid input syntax for integer: ""
"" isn't a valid integer. PostgreSQL accepts unquoted blank fields as null by default in CSV, but "" would be like writing:
SELECT ''::integer;
and fail for the same reason.
If you want to deal with CSV that has things like quoted empty strings for null integers, you'll need to feed it to PostgreSQL via a pre-processor that can neaten it up a bit. PostgreSQL's CSV input doesn't understand all the weird and wonderful possible abuses of CSV.
Options include:
Loading it in a spreadsheet and exporting sane CSV;
Using the Python csv module, Perl Text::CSV, etc to pre-process it;
Using Perl/Python/whatever to load the CSV and insert it directly into the DB
Using an ETL tool like CloverETL, Talend Studio, or Pentaho Kettle
I think it's better to change your csv file like:
"age","first_name","last_name"
23,Ivan,Poupkine
,Eugene,Pirogov
It's also possible to define your table like
CREATE TABLE people (
age varchar(20),
first_name varchar(20),
last_name varchar(20)
);
and after copy, you can convert empty strings:
select nullif(age, '')::int as age, first_name, last_name
from people
Just came across this while looking for a solution and wanted to add that I was able to solve the issue by adding the "null" parameter to the copy_from call:
cur.copy_from(f, tablename, sep=',', null='')
I got this error when loading a '|'-separated CSV file, although there were no '"' characters in my input file. It turned out that I had forgotten to specify FORMAT:
COPY ... FROM ... WITH (FORMAT CSV, DELIMITER '|').
Use the command below to copy data from a CSV in a single line without casting or changing your data type.
Please replace 'NULL' with whatever string in your data is causing the error when copying:
copy table_name from 'path to csv file' (format csv, null 'NULL', DELIMITER ',', HEADER);
I had this same error on a postgres .sql file with a COPY statement, but my file was tab-separated instead of comma-separated and quoted.
My mistake was that I eagerly copy/pasted the file contents from github, but in that process all the tabs were converted to spaces, hence the error. I had to download and save the raw file to get a good copy.
CREATE TABLE people (
first_name varchar(20),
age integer,
last_name varchar(20)
);
"first_name","age","last_name"
Ivan,23,Poupkine
Eugene,,Pirogov
copy people from 'file.csv' with (format csv, header, delimiter ',', null '');
select * from people;
Just in first column.....
Ended up doing this using csvfix:
csvfix map -fv '' -tv '0' /tmp/people.csv > /tmp/people_fixed.csv
In case you know for sure which columns were meant to be integer or float, you can specify just them:
csvfix map -f 1 -fv '' -tv '0' /tmp/people.csv > /tmp/people_fixed.csv
Without specifying the exact columns, one may experience an obvious side-effect, where a blank string will be turned into a string with a 0 character.
This ought to work without you modifying the source CSV file:
alter table people alter column age type text;
copy people from '/tmp/people.csv' with (format csv, header);
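If the column ultimately needs to be an integer again, a follow-up conversion along these lines should work (a sketch, not part of the original answer; it turns the empty strings into NULLs while changing the type back):

alter table people
  alter column age type integer
  using nullif(age, '')::integer;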
There is a way to treat "", the quoted null string, as null in an integer column:
use the FORCE_NULL option:
\copy table_name FROM 'file.csv' with (FORMAT CSV, FORCE_NULL(column_name));
See the PostgreSQL documentation: https://www.postgresql.org/docs/current/static/sql-copy.html
All in Python (using psycopg2): create the empty table first, then use copy_expert to load the CSV into it. It should handle empty values.
import psycopg2

conn = psycopg2.connect(host="hosturl", database="db_name", user="username", password="password")
cur = conn.cursor()

cur.execute("CREATE TABLE schema.destination_table ("
            "age integer, "
            "first_name varchar(20), "
            "last_name varchar(20)"
            ");")

with open(r'C:/tmp/people.csv', 'r') as f:
    next(f)  # Skip the header row. Or remove this line if the csv has no header.
    cur.copy_expert("""COPY schema.destination_table FROM STDIN WITH (FORMAT CSV)""", f)

conn.commit()
Incredibly, my solution to the same error was to just re-arrange the columns, for anyone else trying the above solutions and still not getting past the error.
I apparently had to arrange the columns in my CSV file in the same sequence as the table's column listing in pgAdmin.

Escaping Backslashes in Postgresql

I need to write a file to disk from Postgres that contains a character string of a backslash immediately followed by a forward slash: \/
Code similar to this has not worked:
drop table if exists test;
create temporary table test (linetext text);
insert into test values ('\/\/foo foo foo\/bar\/bar');
copy (select linetext from test) to '/filepath/postproductionscript.sh';
The above code yields \\/\\/foo foo foo\\/bar\\/bar ... it inserts an extra backslash.
When you view the temp table, the string is correctly viewed as \/\/, so I am not sure where or when the text is changed into \\/\\/
I've tried doubling the \, variations of E before the string, and quote_literal() without luck.
I have not found a solution here: Postgres Manual
Running Postgres 9.2, encoded UTF-8.
The problem is that COPY is not intended to write out plain-text files. It is intended to write out files that can be read back by COPY. And the semi-internal encoding that it uses does some backslash escaping.
For what you want to do, you need to write some custom code. Either use a normal client library to read the query results and write them to a file, or, if you want to do it in-server, use something like PL/Perl or PL/Python.
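For instance, here is a rough sketch of the in-server route with PL/Python, assuming the plpython3u extension is available and the target path is writable by the server process (the function name is made up for illustration):

CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION write_lines_to_file(path text) RETURNS void AS $$
# fetch the rows and write them out verbatim, with no COPY-style escaping
rows = plpy.execute("SELECT linetext FROM test")
with open(path, "w") as f:
    for row in rows:
        f.write(row["linetext"] + "\n")
$$ LANGUAGE plpython3u;

SELECT write_lines_to_file('/filepath/postproductionscript.sh');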
The \ escaping is only recognised if the string literal is prefixed with E; otherwise the standard_conforming_strings setting (or the like) is respected (ANSI SQL has a different way of string escaping, probably stemming from COBOL ;-).
drop table if exists test;
create temporary table test (linetext text);
insert into test values ( E'\/\/foo foo foo\/bar\/bar');
copy (select linetext from test) to '/tmp/postproductionscript.sh';
UPDATE: an ugly hack is to use .csv format and still use \t as the delimiter.
The #!/bin/sh shebang header line should be considered a feature.
-- without a header line
drop table if exists test;
create temporary table test (linetext text);
insert into test values ( '\/\/foo foo foo\/bar\/bar');
copy (select linetext AS "#linetext" from test) to '/tmp/postproductionscript_c.sh'
WITH CSV
DELIMITER E'\t'
;
-- with a shebang header line
drop table if exists test;
create temporary table test (linetext text);
insert into test values ( '\/\/foo foo foo\/bar\/bar');
copy (select linetext AS "#/bin/sh" from test) to '/tmp/postproductionscript_h.sh'
WITH CSV
HEADER
DELIMITER E'\t'
;

PostgreSQL 9.0 replace function not working for one character

I'm working with PostgreSQL 9.0
and I have a table in which I need to replace a character with '' (an empty string).
For that I'm using:
update species set engname = replace(engname, '�', '');
In this case species is the table and engname is the field (character varying).
The contents of one of the rows is
" -tellifer fÂÂrthii"
Even after running the query the character is not replaced.
I have also tried
update species set sciname = regexp_replace(sciname, '�', '')
but the character does not get replaced.
My database is:
CREATE DATABASE myDB
WITH OWNER = Myadmin
ENCODING = 'SQL_ASCII'
TABLESPACE = pg_default
LC_COLLATE = 'C'
LC_CTYPE = 'C'
CONNECTION LIMIT = -1;
We are planning to move to UTF-8 encoding, but the conversion with iconv fails because of this character,
so I wanted to replace the character with '' first.
Can anyone tell me how to remove that character?
This symbol can stand for many different characters, so you cannot use replace. Probably your client application uses a different encoding than the database; the symbol signals broken encoding.
The solution is to use the correct encoding:
postgres=# select * from ff;
a
───────────────
žluťoučký kůň
(1 row)
postgres=# set client_encoding to 'latin2'; --setting wrong encoding
SET
postgres=# select * from ff; -- and you can see strange symbols
a
───────────────
�lu�ou�k� k�
(1 row)
postgres=# set client_encoding to 'utf8'; -- setting good encoding
SET
postgres=# select * from ff;
a
───────────────
žluťoučký kůň
(1 row)
Another solution is replacing national or special characters with related ASCII characters.
9.x has the unaccent contrib module for UTF-8, and for some 8-bit encodings there is the function to_ascii().
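A small sketch of both options (CREATE EXTENSION needs 9.1 or later and assumes the contrib package is installed; the to_ascii() call is commented out because it only applies to LATIN1/LATIN2/LATIN9/WIN1250 encoded databases):

CREATE EXTENSION IF NOT EXISTS unaccent;

SELECT unaccent('žluťoučký kůň');   -- zlutoucky kun

-- on a LATIN2-encoded database the built-in to_ascii() does a similar job:
-- SELECT to_ascii('žluťoučký kůň', 'LATIN2');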