PSQL - Copy from CSV file if column item does not exist

I am trying to import a CSV file into a Postgres table, which I can successfully do using COPY FROM:
import.sql
\copy myTable FROM '..\CSV_OUTPUT.csv' DELIMITER ',' CSV HEADER;
But that command only adds rows that are not already in the database; otherwise it exits with an error: Key (id)=(#) already exists.
myTable
id | alias | address
------+-------------+---------------
11 | red_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
CSV_OUTPUT.csv
id | alias | address
------+-------------+---------------
10 | black_foo | 10.1.1.11
12 | blue_foo | 10.1.1.12
13 | grey_foo | 10.1.1.13
14 | pink_foo | 10.1.1.14
My desired output is to insert the rows from the CSV file into Postgres only if the address does not already exist. myTable should end up containing grey_foo and pink_foo, but not black_foo, since its address already exists.
What should be the right queries to use in order to achieve this? Your suggestions and ideas are highly appreciated.

Copy the data into a staging table first, and then update your main table (myTable) with only the rows whose keys don't already exist.
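For example, a minimal sketch of creating and loading the staging table (the \copy command and CSV path are taken from the question):
-- create a staging table with the same structure as mytable
CREATE TEMP TABLE staging (LIKE mytable);
-- client-side load of the CSV into the staging table (run in psql)
\copy staging FROM '..\CSV_OUTPUT.csv' DELIMITER ',' CSV HEADER
Then insert only the rows whose address is not already present: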
with nw as (
    select s.id, s.alias, s.address
    from staging as s
    left join mytable as m on m.address = s.address
    where m.address is null
)
insert into mytable (id, alias, address)
select id, alias, address
from nw;
If you can upgrade to Postgres 9.5, you could instead use an INSERT command with the ON CONFLICT DO NOTHING clause.
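A minimal sketch, assuming mytable has a unique constraint or unique index on address (ON CONFLICT needs one to detect the duplicates):
-- Postgres 9.5+: insert from staging, silently skipping rows
-- whose address already exists in mytable
INSERT INTO mytable (id, alias, address)
SELECT id, alias, address
FROM staging
ON CONFLICT (address) DO NOTHING;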

Can't import a CSV file into PostgreSQL

Summary
I failed to import CSV files into a table on PostgreSQL.
Even though it says that the import completed successfully, no rows were created.
How did this happen, and how can I fix this? Thank you.
Details
1. The CSV file I (failed to) import is like this (see screenshot 1. CSV file imported):
| number | ticket | category | question | answer | url | note |
|--------|------------|-----------|--------------------------|-----------------------|----------------|----------|
| 1 | #0000000 | Temp>123 | *confirming* | Would you...? | https:///....a | - |
| 2 | #1234567 | AAA / BBB | "a" vs "b" | If A, "a". If B, "b". | https:///....b | #0000000 |
| 3 | #1234567-2 | AAA>abc | Can we do sth using "a"? | Yes, blah blah blah. | https:///....b | - |
And this is the table on PostgreSQL
number : numeric
ticket : char
category : char[]
question : char
answer : char
url : char
note : char
2. The message after the import
Even though it says that the import was "successfully completed", this is what I see when I hit "More details" in the import pop-up (see 3. Message - Completed):
--command " "\\copy public.test (\"number\", ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '\"' ESCAPE '''';""
3. The message when I checked whether the file was actually imported
When I click "Count Rows", it says "Table rows counted: 0"
I tried the following query in the table's Query Tool, and it shows that no rows were created:
SELECT * FROM (table name)
For reference
I created a Postgres log, but only the header was written.
Screenshots: 4. No row created / 1. CSV file imported / 2. Import Preference / 3. Message - Completed / 5. postgres_log
After changing the name of a column from "number" to "consecutive", the error message showed up in Query Tool (not in Import/Export)
Tried Query Tool instead of Import/Export
--> the situation didn’t change
Changed the first column name from "number" to "consecutive" in both the CSV and the psql table
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '"' ESCAPE '''';
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'
--> got error message
ERROR: could not open file "/Users/alice/Desktop/test5.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Check the column settings in the tab shown in the 2. Import Preference image, to the right of the Options tab. There you should set the column order to match your file. Also check the details in 3. Message - Completed.
This is a file permission issue.
Open a shell terminal, go to the directory where the data file is stored, run chmod +rx *, and retry loading the data file into your DB.
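Alternatively, per the HINT in the error message above, you can avoid server-side file access entirely by using psql's client-side \copy, which reads the file with your own user's permissions. A sketch based on the command from the question:
\copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'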

How does JasperReports Server store report output internally?

There are a few ways to store report output in JasperReports Server: FS, FTP, and Repository. The repository output is the default. I guess the files in the repository must be stored in the DB or the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?
The repository outputs are stored in the database. Usually there is no need to set a lifetime.
As of JasperReports Server v6.3.0, the reference to every resource is kept in the jiresource table, while the content is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is:
jasperserver=# \d+ jicontentresource
  Column   |         Type          | Modifiers | Storage
-----------+-----------------------+-----------+----------
 id        | bigint                | not null  | plain
 data      | bytea                 |           | extended
 file_type | character varying(20) |           | extended
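For example, to gauge how much space each stored output occupies, you could join the two tables and measure the bytea column (a sketch based on the definitions above):
select r.name, length(c.data) as bytes
from jiresource r
join jicontentresource c on c.id = r.id
order by bytes desc;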

pg_stat_statements enabled, but the table does not exist

I have postgresql-9.4 up and running, and I recently enabled the pg_stat_statements module with the help of the official documentation.
But I'm getting the following errors when I use it:
postgres=# SELECT * FROM pg_stat_statements;
ERROR: relation "pg_stat_statements" does not exist
LINE 1: SELECT * FROM pg_stat_statements;
postgres=# SELECT pg_stat_statements_reset();
ERROR: function pg_stat_statements_reset() does not exist
LINE 1: SELECT pg_stat_statements_reset();
I'm logged in to psql with the postgres user.
I've also checked the list of available extensions:
postgres=# SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements';
name | default_version | installed_version | comment
--------------------+-----------------+-------------------+-----------------------------------------------------------
pg_stat_statements | 1.2 | | track execution statistics of all SQL statements executed
(1 row)
And here's the results of the extension versions query:
postgres=# SELECT * FROM pg_available_extension_versions WHERE name = 'pg_stat_statements';
name | version | installed | superuser | relocatable | schema | requires | comment
--------------------+---------+-----------+-----------+-------------+--------+----------+-----------------------------------------------------------
pg_stat_statements | 1.2 | f | t | t | | | track execution statistics of all SQL statements executed
(1 row)
Any help will be appreciated.
The extension isn't installed. Check with:
SELECT *
FROM pg_available_extensions
WHERE
name = 'pg_stat_statements' and
installed_version is not null;
If the query returns no rows, create the extension:
CREATE EXTENSION pg_stat_statements;
I faced this issue while configuring Percona Monitoring and Management (PMM): for some strange reason PMM connects to the database named postgres, so the pg_stat_statements extension has to be created in that database:
yourdb# \c postgres
postgres# CREATE EXTENSION pg_stat_statements SCHEMA public;
Follow these steps:
Create the extension
CREATE EXTENSION pg_stat_statements;
Change in config
alter system set shared_preload_libraries='pg_stat_statements';
Restart
$ systemctl restart postgresql
Verify whether the change was applied:
select * from pg_file_settings where name='shared_preload_libraries';
The applied attribute must be 'true'.
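After the restart, a quick way to confirm everything works is to query the view, for example:
-- top 5 statements by total execution time; the column is named
-- total_time in these Postgres versions (total_exec_time from v13 on)
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;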
I had the same issue when deploying the environment using Liquibase for the first time.
I understand that my reply may not be related to your problem, but this was the first Google result, so others may arrive here with the same Liquibase issue.
These are PostgreSQL metadata views that are picked up by Liquibase when you generate your first XML file.
In my case it was only useless autogenerated code, so I solved it by deleting these lines:
<changeSet author="martinlarizzate (generated)" id="1588181532394-7">
<createView fullDefinition="false" viewName="pg_stat_statements"> SELECT pg_stat_statements.userid,
pg_stat_statements.dbid,
pg_stat_statements.queryid,
pg_stat_statements.query,
pg_stat_statements.calls,
pg_stat_statements.total_time,
pg_stat_statements.min_time,
pg_stat_statements.max_time,
pg_stat_statements.mean_time,
pg_stat_statements.stddev_time,
pg_stat_statements.rows,
pg_stat_statements.shared_blks_hit,
pg_stat_statements.shared_blks_read,
pg_stat_statements.shared_blks_dirtied,
pg_stat_statements.shared_blks_written,
pg_stat_statements.local_blks_hit,
pg_stat_statements.local_blks_read,
pg_stat_statements.local_blks_dirtied,
pg_stat_statements.local_blks_written,
pg_stat_statements.temp_blks_read,
pg_stat_statements.temp_blks_written,
pg_stat_statements.blk_read_time,
pg_stat_statements.blk_write_time
FROM pg_stat_statements(true) pg_stat_statements(userid, dbid, queryid, query, calls, total_time, min_time, max_time, mean_time, stddev_time, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time, blk_write_time);</createView>
</changeSet>

postgres ALTER TABLE being blocked

I'm running Postgres 8.3 and I am having trouble running an ALTER TABLE ... ADD COLUMN statement, which seems to be blocked by an AccessShareLock. When I run this query:
SELECT t.relname,l.locktype,page,virtualtransaction,pid,mode,granted FROM pg_locks l, pg_stat_all_tables t WHERE l.relation=t.relid ORDER BY relation asc;
The table's name is dealer.
relname | locktype | page | virtualtransaction | pid | mode | granted
dealer | relation | | 2/40 | 12719 | AccessExclusiveLock | f
dealer | relation | | -1/154985751 | | AccessShareLock | t
I also ran
SELECT * FROM pg_prepared_xacts
That returned
transaction | gid | prepared | owner | database
154985751 | 131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM= | 2014-09-19 08:01:49.650957+10 | user | database
The transaction id 154985751 looks similar to the virtualtransaction -1/154985751 in the pg_locks table.
I ran this command to view any processes that may be running queries on the database
ps axu | grep postgres | grep -v idle
and have confirmed there are no other processes running queries on the database.
The log file shows this after the query has been run
2014-11-14 17:25:00.794 EST (pid: 12719) LOG: statement: BEGIN;
2014-11-14 17:25:00.794 EST (pid: 12719) LOG: statement: ALTER TABLE dealer ADD bullet1 varchar;
2014-11-14 17:25:01.795 EST (pid: 12719) LOG: process 12719 still waiting for AccessExclusiveLock on relation 2321398 of database 2321293 after 1000.133 ms
2014-11-14 17:25:01.795 EST (pid: 12719) STATEMENT: ALTER TABLE dealer ADD bullet1 varchar;
What could be causing the AccessShareLock on the dealer table? I'm guessing it has something to do with transaction 154985751. Is there a way to terminate a transaction using the virtual id?
You have a prepared transaction in place. Prepared transactions - those where PREPARE TRANSACTION but not COMMIT PREPARED or ROLLBACK PREPARED has been run - hold locks, just like normal running transactions do.
Prepared transactions may be used by XA transaction managers, JTA, etc, not necessarily directly by your app. Many queuing systems use them too. If you don't know what the transaction is and you commit it or roll it back you may disrupt something that is relying on two-phase commit.
If you are certain that you know what it is you can:
COMMIT PREPARED '131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM='
or
ROLLBACK PREPARED '131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM='
depending on whether you wish to commit or abort the prepared xact.
You can't inspect the transaction to see what it did/does, you need to figure out what app/tool created it and why if you don't know what it is.
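If it helps with tracking it down, you can list the relations a prepared transaction holds locks on by joining pg_locks to pg_prepared_xacts; a sketch, relying on the -1/ virtual transaction prefix seen above:
SELECT p.gid, l.relation::regclass AS relation, l.mode, l.granted
FROM pg_prepared_xacts p
JOIN pg_locks l ON l.virtualtransaction = '-1/' || p.transaction::text
WHERE l.relation IS NOT NULL;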
The identifier looks suspiciously like [number]_[base64]_[base64], so let's see what we can do with that:
postgres=> SELECT decode((string_to_array('131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM=','_'))[2], 'base64');
decode
------------------------------------------------------------------
\x312d613332303361373a623032333a35343130663433313a31633565383939
(1 row)
postgres=> SELECT decode((string_to_array('131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM=','_'))[3], 'base64');
decode
--------------------------------------------------------------
\x613332303361373a623032333a35343130663433313a31633565383963
(1 row)
Hm, looks like ASCII or similar, let's see:
postgres=> SELECT convert_from(decode((string_to_array('131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM=','_'))[2], 'base64'), 'utf-8');
convert_from
---------------------------------
1-a3203a7:b023:5410f431:1c5e899
(1 row)
postgres=> SELECT convert_from(decode((string_to_array('131075_MS1hMzIwM2E3OmIwMjM6NTQxMGY0MzE6MWM1ZTg5OQ==_YTMyMDNhNzpiMDIzOjU0MTBmNDMxOjFjNWU4OWM=','_'))[3], 'base64'), 'utf-8');
convert_from
-------------------------------
a3203a7:b023:5410f431:1c5e89c
(1 row)
Looks vaguely GUID/UUID-ish, with odd formatting and grouping.
Maybe those identifiers will help you figure out where the xact came from.
BTW, 8.3 is exceedingly obsolete. Plan your upgrade.

DB2 CLI result output

When running command-line queries in MySQL you can optionally use \G as a statement terminator; instead of the result set columns being listed horizontally across the screen, it lists each column vertically, with the corresponding data to the right. Is there a way to do the same or a similar thing with the DB2 command-line utility?
Example regular MySQL result
mysql> select * from tagmap limit 2;
+----+---------+--------+
| id | blog_id | tag_id |
+----+---------+--------+
| 16 | 8 | 1 |
| 17 | 8 | 4 |
+----+---------+--------+
Example Alternate MySQL result:
mysql> select * from tagmap limit 2\G
*************************** 1. row ***************************
id: 16
blog_id: 8
tag_id: 1
*************************** 2. row ***************************
id: 17
blog_id: 8
tag_id: 4
2 rows in set (0.00 sec)
Obviously, this is much more useful when the columns are large strings, or when there are many columns in a result set, but this demonstrates the formatting better than I can probably explain it.
I don't think such an option is available with the DB2 command line client. See http://www.dbforums.com/showthread.php?t=708079 for some suggestions. For a more general set of information about the DB2 command line client you might check out the IBM DeveloperWorks article DB2's Command Line Processor and Scripting.
A little bit late, but I found this post while searching for an option to retrieve only the selected data.
db2 -x <query> returns only the result. More options can be found here: https://www.ibm.com/docs/en/db2/11.1?topic=clp-options
Example:
[db2inst1@a21c-db2 db2]$ db2 -n select postschemaver from files.product
POSTSCHEMAVER
--------------------------------
147.3
1 record(s) selected.
[db2inst1@a21c-db2 db2]$ db2 -x select postschemaver from files.product
147.3
The DB2 command-line utility always displays data in tabular format, i.e. rows horizontally and columns vertically. It does not support any other format the way the \G statement terminator does for MySQL. But yes, you can store column-organized data in DB2 tables when DB2_WORKLOAD=ANALYTICS is set.
db2 => connect to coldb
Database Connection Information
Database server = DB2/LINUXX8664 10.5.5
SQL authorization ID = BIMALJHA
Local database alias = COLDB
db2 => create table testtable (c1 int, c2 varchar(10)) organize by column
DB20000I The SQL command completed successfully.
db2 => insert into testtable values (2, 'bimal'),(3, 'kumar')
DB20000I The SQL command completed successfully.
db2 => select * from testtable
C1 C2
----------- ----------
2 bimal
3 kumar
2 record(s) selected.
db2 => terminate
DB20000I The TERMINATE command completed successfully.