Summary
I failed to import a CSV file into a table on PostgreSQL.
Even though pgAdmin says the import completed successfully, no rows were created.
How did this happen, and how can I fix it? Thank you.
Details
1. The CSV file I (failed to) import looks like this (screenshot: 1. CSV file imported):
| number | ticket | category | question | answer | url | note |
|--------|------------|-----------|--------------------------|-----------------------|----------------|----------|
| 1 | #0000000 | Temp>123 | *confirming* | Would you...? | https:///....a | - |
| 2 | #1234567 | AAA / BBB | "a" vs "b" | If A, "a". If B, "b". | https:///....b | #0000000 |
| 3 | #1234567-2 | AAA>abc | Can we do sth using "a"? | Yes, blah blah blah. | https:///....b | - |
And this is the table on PostgreSQL:
number : numeric
ticket : char
category : char[]
question : char
answer : char
url : char
note : char
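For reference, a minimal DDL sketch matching the columns listed above (an assumption about how the table was created; the table name public.test comes from the import command below):
CREATE TABLE public.test (
    "number" numeric,   -- quoted because number is used as an identifier
    ticket   char,      -- note: char with no length modifier means char(1) in PostgreSQL
    category char[],
    question char,
    answer   char,
    url      char,
    note     char
);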
2. The message after the import
Even though it says the import was "successfully completed", when I hit "More details" on the import pop-up (screenshot: 3. Message - Completed), I see:
--command " "\\copy public.test (\"number\", ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '\"' ESCAPE '''';""
3. How I checked whether the file was actually imported
When I click "Count Rows", it says "Table rows counted: 0".
I also ran the following query in the table's Query Tool, and it returned no rows:
SELECT * FROM (table name)
For reference:
I enabled the Postgres log, but only the header was written (screenshot: 5. postgres_log).
Screenshots
1. CSV file imported / 2. Import Preference / 3. Message - Completed / 4. No row created / 5. postgres_log
After changing the name of a column from "number" to "consecutive", an error message finally showed up in the Query Tool (not in Import/Export).
Tried the Query Tool instead of Import/Export
--> the situation didn't change
Changed the first column name from "number" to "consecutive" in both the CSV and the psql table
--> the situation didn't change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '"' ESCAPE '''';
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'
--> got an error message
ERROR: could not open file "/Users/alice/Desktop/test5.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Check the column settings in the Columns tab (just right of the Options tab) of the Import/Export dialog, shown in the 2. Import Preference image; the column order there should match the order in your file.
Also check the details shown in 3. Message - Completed.
This is a file permission issue: a server-side COPY is executed by the PostgreSQL server process, which must be able to read the file.
Open a shell terminal, go to the directory where the data file is stored, run chmod +rx * so the files are readable, and retry loading the data file into your DB (the parent directories also need to be searchable by the postgres user).
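Alternatively, as the HINT in the error suggests, read the file client-side with psql's \copy so the server process never needs access to your home directory; a minimal sketch reusing the command from the question:
\copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'
\copy issues the same COPY on the server but streams the file from the client, so only your own shell user needs read access to it.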
Related
I found one strange problem. With the Greenplum gpload tool, I tried to import data from a single text file into a Greenplum DB.
The content of file t1.out is:
\N|24234243
\N|\N
12342|\N
and gpload version is:
gpload version 5.3.0 build
commit:2155c5a8cf8bb7f13f49c6e248fd967a74fed591
and the table t1 is as follow:
test=# \d t1;
Table "public.t1"
Column | Type | Modifiers
--------+--------+-----------
id1 | bigint |
id2 | bigint |
When I ran gpload with t1.yaml, I got the following error:
2019-04-09 20:12:18|WARN|Please use following query to access the detailed error
2019-04-09 20:12:18|WARN|select * from p_read_error_log('ext_gpload_reusable_b7ef1344_5ac0_11e9_b6fc_fa163e2d09a1') where cmdtime > to_timestamp('1554811937.76')
And when I ran this SQL in PostgreSQL (slightly changed to select only the two key fields), I got the following errors:
errmsg | rawdata
invalid input syntax for integer: "\N", column id1 | \N|24234243
invalid input syntax for integer: "\N", column id1 | \N|\N
invalid input syntax for integer: "\N", column id2 | 12342|\N
It showed that all 3 lines failed to import with 'invalid input syntax for integer: "\N"'.
But I can import the same 3 rows into t1 successfully with the COPY command.
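For reference, a minimal sketch of the COPY invocation that works here (the file path is hypothetical; \N is already the default null string for text format):
COPY t1 FROM '/path/to/t1.out' WITH DELIMITER '|' NULL '\N';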
I tried several ways to find out why, but failed. Part of my t1.yaml is as follows:
- FORMAT: text
- DELIMITER: '|'
- ESCAPE: 'OFF'
- NULL_AS: '\N'
- ERROR_LIMIT: 100
BTW: https://gpdb.docs.pivotal.io/530/utility_guide/admin_utilities/gpload.html#topic1__cfnullas shows that the default NULL_AS is \N, so gpload should have recognized \N; why did it fail to mark the two fields as NULL?
Any help is appreciated!
What version of GPDB are you using?
There was a known issue in v5 that was fixed in 5.6+
https://gpdb.docs.pivotal.io/560/relnotes/GPDB_561_README.html
29197 - gpload/ gpfdist
When running a gpload operation, the gpfdist utility did not recognize \N as the NULL character when the gpload configuration file specified a null character null_as: '\N'. When processing the configuration file, the gpload utility incorrectly escaped the backslash (\) with another backslash.
This issue has been resolved. The gpload utility has been improved to properly handle a backslash when processing the null_as property.
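To confirm which build you are on before and after upgrading, a quick check from psql, for example:
select version();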
I am trying to import a CSV file into a Postgres table, which I can do successfully using COPY FROM:
import.sql
\copy myTable FROM '..\CSV_OUTPUT.csv' DELIMITER ',' CSV HEADER;
But that command only adds rows that are not already in the database; otherwise it exits with an error: Key (id)=(#) already exists.
myTable
id | alias | address
------+-------------+---------------
11 | red_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
CSV_OUTPUT.csv
id | alias | address
------+-------------+---------------
10 | black_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
13 | grey_foo | 10.1.1.13
14 | pink_foo | 10.1.1.14
My desired output is to insert the rows from the CSV file into PostgreSQL only if the address does not already exist. myTable should end up containing grey_foo and pink_foo, but not black_foo, since its address already exists.
What should be the right queries to use in order to achieve this? Your suggestions and ideas are highly appreciated.
Copy the data into a staging table first, and then insert into your main table (myTable) only the rows whose address does not already exist. For example, assuming you have imported the data into a table named staging:
with nw as (
select s.id, s.alias, s.address
from staging as s
left join mytable as m on m.address=s.address
where m.address is null
)
insert into mytable
(id, alias, address)
select id, alias, address
from nw;
If you can upgrade to Postgres 9.5, you could instead use an INSERT command with the ON CONFLICT DO NOTHING clause, as sketched below.
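A minimal sketch of that route, assuming a unique constraint (or unique index) on address:
insert into mytable (id, alias, address)
select id, alias, address
from staging
on conflict (address) do nothing;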
I have this code, which used to work with org-table-export, but now it throws an error message at me:
'No such transformation function csv'
#+PROPERTY: table_export_file filename.csv
#+PROPERTY: TABLE_EXPORT_FORMAT csv
| 1 | 2 |
| a | b |
What could be wrong? I'm on Org 8.3.4 with Ubuntu 16.04.
I figured it out by trying to change the default value of the variable with M-x customize-variable RET org-table-export-.. (tab completion), which showed me that I had apparently set it wrong to start with; my property should have looked like this:
#+PROPERTY: TABLE_EXPORT_FORMAT orgtbl-to-csv
Mystery solved.
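For reference, a minimal corrected header, keeping the file name from the question:
#+PROPERTY: TABLE_EXPORT_FILE filename.csv
#+PROPERTY: TABLE_EXPORT_FORMAT orgtbl-to-csv
With point inside the table, M-x org-table-export then writes filename.csv.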
There are a few ways to store report output in JasperReports Server: FS, FTP, and the repository. Repository output is the default. I guess the files in the repository must be stored in the DB or the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?
The repository outputs are stored in the database. Usually there is no need to set the lifetime.
As of JasperReports Server v6.3.0, the reference to every resource is kept in the jiresource table, while the content is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is:
jasperserver=# \d+ jicontentresource
  Column   |         Type          | Modifiers | Storage
-----------+-----------------------+-----------+----------
 id        | bigint                | not null  | plain
 data      | bytea                 |           | extended
 file_type | character varying(20) |           | extended
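If you do want to prune old outputs, a hedged starting point built from the same two tables (verify against your version's schema, and prefer deleting through the repository UI or API rather than raw SQL):
select r.id, r.name, r.creation_date
from jiresource r
join jicontentresource c on r.id = c.id
where r.creation_date < now() - interval '90 days';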
Sometimes one gets a message like:
Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.
I am wondering how Sugar determines what version of the database it is using. In the above case, I get the following output:
select * from config where name='sugar_version';
+----------+---------------+-------+
| category | name | value |
+----------+---------------+-------+
| info | sugar_version | 6.4.5 |
+----------+---------------+-------+
1 row in set (0.00 sec)
cat config.php |grep sugar_version
'sugar_version' => '6.4.5',
Given the above output, I am wondering how to debug the message "Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.": Sugar seems to think the files are not of version 6.4.5 even though sugar_version is 6.4.5 in config.php. Where should I look next?
Two options for the issue:
Option 1: Update your database to the latest version.
Option 2: Follow the steps below and change the SugarCRM config version.
mysql> select * from config where name ='sugar_version';
+----------+---------------+---------+----------+
| category | name | value | platform |
+----------+---------------+---------+----------+
| info | sugar_version | 7.7.0.0 | NULL |
+----------+---------------+---------+----------+
1 row in set (0.00 sec)
Update your SugarCRM version to the appropriate value:
mysql> update config set value='7.7.1.1' where name ='sugar_version';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The above commands seem to be correct. Sugar seems to check that config.php and the config table in the database contain the same version. In my case I was making the mistake of using the wrong database -- so if you're like me and tend to have your databases mixed up, double check in config.php that 'dbconfig' is indeed pointing to the right database.
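For example, a quick way to see which database config.php points at (the dbconfig keys printed are the standard SugarCRM ones; adjust the line count as needed):
grep -A 6 "'dbconfig'" config.php
Confirm that the db_name entry it prints matches the database you queried above.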