Say you have a SELECT id FROM table query (the real case is a complex query) that returns several rows.
The problem: how do you get all the returned ids in a single row, comma separated?
SELECT string_agg(id::text, ',') FROM table
Requires PostgreSQL 9.0 but that's not a problem.
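If the order of the ids matters, you can sort inside the aggregate; a minimal sketch, assuming the same integer id column:
SELECT string_agg(id::text, ',' ORDER BY id) FROM table
Output: 1,2,3,4,5,6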
You can use the array() and array_to_string() functions together with your query.
With SELECT array( SELECT id FROM table ); you will get a result like: {1,2,3,4,5,6}
Then, if you wish to remove the {} signs, use the array_to_string() function with a comma as the separator: SELECT array_to_string( array( SELECT id FROM table ), ',' ) gives a result like: 1,2,3,4,5,6
You can generate a CSV from any SQL query using psql:
$ psql
> \o myfile.csv
> \f ','
> \a
> SELECT col1 AS column1, col2 AS column2 ... FROM ...;
The resulting myfile.csv will have the SQL resultset column names as CSV column headers, and the query tuples as CSV rows.
h/t http://pookey.co.uk/wordpress/archives/51-outputting-from-postgres-to-csv
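Alternatively, the \copy meta-command (used elsewhere on this page) produces the same CSV, header included, in a single command; a sketch using the same example query and file name:
\copy (SELECT col1 AS column1, col2 AS column2 ... FROM ...) to 'myfile.csv' with CSV HEADER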
Use the array_to_string() and array() functions for the same result.
select array_to_string(array(select column_name from table_name where id=5), ', ');
Use the query below; it works and gives the exact result:
SELECT array_to_string(array_agg(id), ',') FROM table
Output: 1,2,3,4,5
SELECT array_agg(id) FROM table
Output: {1,2,3,4}
I am using Postgres 11, and EntityFramework fetches it as an array of integers.
I have the following query:
SELECT DISTINCT ON (user_id) user_id, timestamp
FROM entries
WHERE user_id in (1,2)
AND entry_type IN(
SELECT jsonb_array_elements_text(
SELECT entry_types
FROM users INNER JOIN orgs
ON org_id = orgs.id
WHERE users.id = 1
)
);
I'm getting a syntax error at or near select
syntax error at or near "select" LINE 1: ... entry_type in( select
jsonb_array_elements_text(select ent.
The field entry_types is a JSONB field, so I am trying to convert it to text in order to use it in the WHERE IN clause.
PostgreSQL 13.0
This sub-query within jsonb_array_elements_text
SELECT entry_types
FROM users INNER JOIN orgs
ON org_id = orgs.id
WHERE users.id = 1
Returns a single JSONB entry like this:
entry_types
--------------------------------------------
["type1", "type2", "type3"]
I'm simply trying to use the array of text values returned there as the criteria inside the WHERE IN clause.
The syntax error seems to point somewhere else, so maybe I am wrong, but the problem I see is a missing pair of parentheses around the subquery:
jsonb_array_elements_text((SELECT ...))
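Applied to the query from the question, the corrected statement would look like this (same tables and columns as above):
SELECT DISTINCT ON (user_id) user_id, timestamp
FROM entries
WHERE user_id IN (1,2)
AND entry_type IN (
    SELECT jsonb_array_elements_text((
        SELECT entry_types
        FROM users INNER JOIN orgs
        ON org_id = orgs.id
        WHERE users.id = 1
    ))
);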
I have the following WITH statement and copy command:
with output01 as
(select * from (
    select name,
        case
            when column1 is not null and lower(column1) in ('point1','point2','point3','point4') then 3456
            else null
        end column1Desc,
        case
            when column2 is not null and lower(column2) in ('point1','point2','point3','point4') then 2456
            else null
        end column2Desc,
        column3, column4
    from ...) t1),
output02 as
(select * from (
    select name,
        case
            when column1 is not null and lower(column1) in ('point1','point2','point3','point4') then 3456
            else null
        end column1Desc,
        case
            when column2 is not null and lower(column2) in ('point1','point2','point3','point4') then 2456
            else null
        end column2Desc,
        column3, column4
    from ...) t2),
output3 as (SELECT * FROM output01 UNION ALL SELECT * FROM output02)
\copy (select * from output3) to '/usr/share/output.csv' with CSV ENCODING 'UTF-8' DELIMITER ',' HEADER;
I am getting the following ERROR:
ERROR: relation "output3" does not exist
All psql backslash commands need to be written on a single line, so you can't have a multi-line query together with \copy. One workaround is to create a (temporary) view with that query, then use that in the \copy command.
Something along the lines:
create temporary view data_to_export
as
with cte as (..)
select *
from cte
;
\copy (select * from data_to_export) to ...
You are getting this error because you are running your CTE query and your copy command as two separate statements. Assuming your WITH query works fine on its own, you can fold it into the \copy command (keeping the whole command on a single line) like below:
\copy (WITH tab1 as (Your SQL statement),
tab2 as ( SELECT ... FROM tab1 WHERE your filter),
tab3 as ( SELECT ... FROM tab2 WHERE your filter)
SELECT * FROM tab3) to '/usr/share/results.csv' with CSV ENCODING 'UTF-8' DELIMITER ',' HEADER;
The use case requires running exclusion queries.
Something like:
select col1
from awesome_table
where col2 not in (a,b,c,d)
and col3 not in (a1,a2,a3,a4);
As the sets of excluded col2 values and excluded col3 values are variable in size, what is a good way to generate the prepared statement?
One hack I can think of is to define an upper limit on the set size, say 15, and fill the remaining placeholders with repeated values whenever the user supplies fewer than the maximum. Is there a better way? And how are prepared statements supposed to handle this, as per the philosophy of the community?
Can you pass (Postgres) arrays from Go?
Then you could rewrite the statement to
where col2 <> ALL ($1)
and col3 <> ALL ($2)
where $1 and $2 are (Postgres) arrays containing the values.
If you can't pass proper array instances, you can pass the values as a string that's formatted so that it can be cast to an array.
select col1
from awesome_table
where col2 <> ALL ( cast($1 as int[]) )
and col3 <> ALL ( cast($2 as text[]) );
Then you could pass '{1,2,3}' for the first parameter and e.g. '{"foo", "bar"}' for the second parameter. You need to adjust the array types to the actual data types of your columns.
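You can check the cast-based variant directly in psql; a minimal sketch with literal values standing in for the parameters:
select 5 <> ALL ( cast('{1,2,3}' as int[]) );  -- true: 5 is not in the excluded set
select 2 <> ALL ( cast('{1,2,3}' as int[]) );  -- false: 2 is in the excluded set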
Adding to @a_horse_with_no_name's answer:
In Go, the PostgreSQL driver github.com/lib/pq provides a function pq.Array() that wraps a Go slice so it can be passed as a PostgreSQL array.
...
import (
"github.com/lib/pq"
)
...
select col1
from awesome_table
where col2 <> ALL ($1)
and col3 <> ALL ($2);
where
slice1 := []string{val1, val2}
slice2 := []string{val3, val4}
pq.Array(slice1) can be passed for the $1 placeholder and pq.Array(slice2) for the $2 placeholder when executing the prepared statement, e.g. db.Query(query, pq.Array(slice1), pq.Array(slice2)).
More about ANY and ALL can be found here.
I have multiple tables that all have the same columns, but in different order. I want to merge them all together. I've created an empty table with the standard columns in the order I would like. I've tried inserting with
insert into master_table select * from table1;
but that doesn't work because of the differing column order - some of the values end up in the wrong columns. What is the best way to create one table out of them all in the order specified in my empty master table?
If you are dealing with many columns and many tables, you can use the information_schema to get the columns. You can loop through all the tables you want to insert from and run this in a plpgsql procedure, replacing table1 with a variable:
EXECUTE (
  SELECT
    'insert into master_table (' ||
    string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position) ||
    ') SELECT ' ||
    string_agg('p.' || quote_ident(column_name), ',' ORDER BY ordinal_position) ||
    ' FROM table1 p'
  FROM information_schema.columns
  WHERE table_name = 'master_table');
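A hedged sketch of the surrounding loop, assuming the source tables are named table1 and table2 (substitute your own list):
DO $$
DECLARE
    src text;
BEGIN
    -- table names here are assumptions; list every table you want to merge
    FOREACH src IN ARRAY ARRAY['table1', 'table2']
    LOOP
        EXECUTE (
            SELECT 'insert into master_table (' ||
                   string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position) ||
                   ') SELECT ' ||
                   string_agg('p.' || quote_ident(column_name), ',' ORDER BY ordinal_position) ||
                   ' FROM ' || quote_ident(src) || ' p'
            FROM information_schema.columns
            WHERE table_name = 'master_table'
        );
    END LOOP;
END $$;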
Just indicate the proper order in the SELECT
instead of
select *
If you want the third field in the second position:
select field1, field3, field2
Or you can use the INSERT column-list syntax:
INSERT INTO master_table (field1, field3, field2)
SELECT * FROM table1
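For instance (hypothetical column names), if master_table was created as (name, col_a, col_b) but table1 was created as (name, col_b, col_a), listing the target columns in table1's order makes every value land in the right place:
INSERT INTO master_table (name, col_b, col_a)
SELECT * FROM table1;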
I am using PostgreSQL 9.3 and I am trying to pass column names as a string into my query. For newtable the number of columns can be dynamic; sometimes it might be 3 or more, which is why I select the column names from another table and pass the result of that query as a string into the existing query.
How can I do this?
select * from crosstab (
'select "TIMESTAMP_S","VARIABLE","VALUE" from archieve_export_db_a3 group by 1,2,3 order by 1,2',
'select distinct "VARIABLE" From archieve_export_db_variables order by 1'
) AS newtable (TIMESTAMP_S int,_col1 integer,_col2 integer);