I am trying to execute the following query:
INSERT
INTO rooms(
id,
name,
body,
parents,
tags,
createtime,
creator,
deletetime,
meta,
params,
terms,
updater,
updatetime,
counts,
identities)
SELECT *
FROM dblink ('dbname=oldsb',
'SELECT '
'(SELECT newid FROM id_map WHERE oldid = entities.id) AS id, '
'id AS name, '
'description AS body, '
'NULL AS parents, '
'NULL AS tags, '
'ROUND(EXTRACT(EPOCH FROM current_timestamp)*1000) AS createtime, '
'NULL AS creator, '
'ROUND(EXTRACT(EPOCH FROM deletetime)*1000) AS deletetime, '
'json_build_object(''picture'', picture) AS meta, '
'jsonb_object_agg(
(SELECT * '
'FROM jsonb_each(params) '
'AS fields (name, value) '
'WHERE name <> ''places'')) AS params, '
'terms AS terms, '
'NULL AS updater, '
'EXTRACT(EPOCH FROM lastseentime)*1000 AS updatetime, '
'NULL AS counts, '
'NULL AS identities '
'FROM entities WHERE type=''room''')
AS t(
id uuid,
name text,
body text,
parents uuid[],
tags smallint[],
createtime bigint,
creator text,
deletetime bigint,
meta jsonb,
params jsonb,
terms tsvector,
updater text,
updatetime bigint,
counts jsonb,
identities text[]);
and I am getting the following error:
Executing Rooms migration query
ERROR: subquery must return only one column
CONTEXT: Error occurred on dblink connection named "unnamed": could not execute query.
table for updating identities in rooms.
I am not able to understand where I am going wrong with the query.
One of your subqueries returns multiple results. Probably this one:
SELECT newid FROM id_map WHERE oldid = entities.id
Add LIMIT 1 to the end and try again. In fact, you can add LIMIT 1 to every subquery and see if it helps.
Is it possible that jsonb_each(params) has more than one column?
jsonb_object_agg(
(SELECT * '
'FROM jsonb_each(params) '
'AS fields (name, value) '
'WHERE name <> ''places'')) AS params,
I don't know what jsonb_object_agg() does, but I don't think it's designed to handle an input that is a table. Usually, aggregate functions combine one or more rows of results into a single value.
Yes, it is. Check the documentation for the json_each function; json_each and jsonb_each work the same way.
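For what it's worth, a minimal sketch of how jsonb_object_agg() is usually fed (the sample object here is made up for illustration): expand the object with jsonb_each() in the FROM clause and aggregate its two columns back into one object:
-- drop the 'places' key and rebuild the remaining pairs into a single jsonb value
SELECT jsonb_object_agg(f.name, f.value) AS params
FROM jsonb_each('{"a": 1, "places": 2}'::jsonb) AS f(name, value)
WHERE f.name <> 'places';
-- result: {"a": 1}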
Related
I have this table in postgres
CREATE TABLE ct(id SERIAL, rowid TEXT, attribute TEXT, value TEXT);
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att1','val1');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att2','val2');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att3','val3');
INSERT INTO ct(rowid, attribute, value) VALUES('test1','att4','val4');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att1','val5');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att2','val6');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att3','val7');
INSERT INTO ct(rowid, attribute, value) VALUES('test2','att4','val8');
I want to generate a dynamic crosstab query using this table.
So far I have created the static query by following the example on the official Postgres documentation page.
select * from crosstab
('select rowid, attribute, value from ct order by 1,2')
as final_result(rowid text, att1 text, att2 text, att3 text, att4 text)
Now I want this part to be dynamic
as final_result(rowid text, att1 text, att2 text, att3 text, att4 text)
I tried a few things, such as
Creating a query which generates the column names with their types and passing that query in as final_result(query), but it doesn't work, as shown here:
SELECT 'rowid text, '
|| string_agg(Distinct attribute, ' text, ') as name
FROM ct;
select * from crosstab
('select rowid, attribute, value from ct order by 1,2')
as final_result(SELECT 'rowid text, '
|| string_agg(Distinct attribute, ' text, ') as name
FROM ct;)
OR
select * from crosstab
('select rowid, attribute, value from ct order by 1,2',
SELECT 'rowid text, '
|| string_agg(Distinct attribute, ' text, ')) as name
FROM ct;)
Both of these queries don't work.
I searched Stack Overflow and found this link, but it doesn't have a proper accepted answer:
Dynamically generate columns for crosstab in PostgreSQL
Any idea how this can be done?
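For what it's worth, the usual workaround is a two-step approach, sketched below (the coldef alias and the PL/pgSQL/EXECUTE suggestion are just one way to wire it up): plain SQL has to know the output columns at parse time, so the column list is generated first and then interpolated into a dynamically executed statement.
-- Step 1: build the column definition list from the distinct attributes
SELECT 'rowid text, '
       || string_agg(attribute || ' text', ', ' ORDER BY attribute) AS coldef
FROM (SELECT DISTINCT attribute FROM ct) s;
-- returns: rowid text, att1 text, att2 text, att3 text, att4 text
-- Step 2: splice that string into the AS final_result(...) part of the crosstab
-- query and run it with EXECUTE in a PL/pgSQL function (or build it client-side).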
How do I count the number of distinct elements in an array object created by ARRAY_AGG() in PostgreSQL? Here's a toy example for discussion purposes:
SELECT ARRAY_AGG (first_name || ' ' || last_name) actors
FROM film
I have tried ARRAY_LENGTH(), LENGTH(), etc., like so:
SELECT ARRAY_LENGTH(a.actors)
FROM (SELECT ARRAY_AGG (first_name || ' ' || last_name) actors
FROM film) a;
But I get an error:
function array_length(integer[]) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 208
So I tried (2):
SELECT ARRAY_LENGTH( CAST(COALESCE(a.actors, '0') AS integer) )
FROM (SELECT ARRAY_AGG (first_name || ' ' || last_name) actors
FROM film) a;
but I get the error:
malformed array literal: "0"
Detail: Array value must start with "{" or dimension information.
Position: 119
The function array_length(anyarray, int) requires two arguments, the array and the dimension. For example:
Select array_length(array[1,2,3], 1);
Result:
3
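Applied to the query from the question, that would look something like this (arrays produced by ARRAY_AGG are one-dimensional, so the dimension argument is 1):
SELECT array_length(a.actors, 1)
FROM (SELECT ARRAY_AGG(first_name || ' ' || last_name) AS actors
      FROM film) a;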
If you are only dealing with a single dimension array, cardinality() is easier to use:
SELECT cardinality(a.actors)
FROM (
SELECT ARRAY_AGG (first_name || ' ' || last_name) actors
FROM film
) a;
I'm trying to query using ILIKE on a user's name in PostgreSQL. There are columns for first_name and last_name, and I'd like the search term to match against the two concatenated, with a space between, so that a user may search for either, or a full name using one input. "John" "Doe" or "John Doe".
This always returns no results:
SELECT * FROM user_profiles WHERE first_name || ' ' || last_name ILIKE '%ryan%'
This always returns the one result I am expecting:
SELECT * FROM user_profiles WHERE first_name ILIKE '%ryan%'
Based on everything I've read, the first query should work as I am expecting, but it doesn't. No results and no errors. What am I missing here?
The first query would return no results if last_name were NULL (and similarly if first_name were NULL).
So, try this instead:
WHERE first_name || ' ' || COALESCE(last_name, '') ILIKE '%ryan%'
or:
WHERE CONCAT_WS(' ', first_name, last_name) ILIKE '%ryan%'
The concat_ws() function ignores NULL arguments (apart from its first argument, the separator).
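A quick illustration of the difference, with a hard-coded name for the sake of the example:
SELECT 'Ryan' || ' ' || NULL;          -- NULL, so ILIKE '%ryan%' can never match
SELECT CONCAT_WS(' ', 'Ryan', NULL);   -- 'Ryan', the NULL argument is simply skipped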
I'm trying to insert point geometry values and other data from one table to another table.
-- create tables
create table bh_tmp (bh_id integer, bh_name varchar
, easting decimal, northing decimal, ground_mod decimal);
create table bh (name varchar);
SELECT AddGeometryColumn('bh', 'bh_geom', 27700, 'POINT',3);
-- populate bh_tmp
insert into bh_tmp values
(1,'C5',542945.0,180846.0,3.947),
(3,'B24',542850.0,180850.0,4.020),
(4,'B26',543020.0,180850.0,4.020);
-- populate bh from bh_tmp
insert into bh(name, bh_geom) SELECT
bh_name,
CONCAT($$ST_GeomFromText('POINT($$, Easting, ' ', Northing, ' '
, Ground_mOD, $$)', 27700)$$);
FROM bh_tmp;
Gives this error:
ERROR: parse error - invalid geometry
SQL state: XX000
Hint: "ST" <-- parse error at position 2 within geometry
I can't see anything wrong with the ST_GeomFromText string that I've specified. But I can populate table bh if I insert rows 'manually', e.g.:
INSERT INTO bh (name, bh_geom)
VALUES ('C5', ST_GeomFromText('POINT(542945.0 180846.0 3.947)', 27700));
What am I doing wrong?
First of all, there is a misplaced semicolon after the CONCAT(...) call.
And you can't concatenate the function name itself into the string:
INSERT INTO bh(name, bh_geom)
SELECT bh_name
, ST_GeomFromText('POINT(' || concat_ws(' ', easting, northing, ground_mod) || ')'
, 27700)
FROM bh_tmp;
Or, since you have values already (not text), you could use ST_MakePoint() and ST_SetSRID():
ST_SetSRID(ST_MakePoint(easting, northing, ground_mod), 27700)
Should be faster.
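For completeness, the whole statement with that expression might look like this (a sketch using the same columns as above):
INSERT INTO bh(name, bh_geom)
SELECT bh_name
     , ST_SetSRID(ST_MakePoint(easting, northing, ground_mod), 27700)
FROM bh_tmp;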
You're getting that error because the output of the CONCAT function is text, and your bh_geom column is geometry, so you're trying to insert text into geometry. This will work:
INSERT INTO bh(name, bh_geom) SELECT
bh_name,
ST_GeomFromText('POINT('
|| easting|| ' '
|| Northing
|| ' '
|| Ground_mOD
|| ')', 27700)
FROM bh_tmp;
Duplicate of: TSQL varchar string manipulation
I'm building a dynamic SQL statement out of parameters from a Reporting Services report. Reporting Services passes multi-value parameters in a basic CSV format. For example, a list of states may be represented as follows: AL,CA,NY,TN,VA
In a SQL statement this is OK:
WHERE customerState IN (@StateList)
However, the dynamic variant isn't OK:
SET @WhereClause1 = @WhereClause1 + 'AND customerState IN (' + @StateList + ') '
This is because it translates to (invalid SQL):
AND customerState IN (AL,CA,NY,TN,VA)
To process it needs something like this:
AND customerState IN ('AL','CA','NY','TN','VA')
Is there some cool expression I can use to insert the single quotes into my dynamic SQL?
REPLACE didn't work for me when used with IN for some reason. I ended up using CHARINDEX
WHERE CHARINDEX( ',' + customerState + ',', ',' + @StateList + ',' ) > 0
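The wrapping commas make each state match only as a whole token. A quick sketch with the sample list from the question (values hard-coded for illustration):
-- returns 4 (> 0, so it counts as a match): ',CA,' occurs inside ',AL,CA,NY,TN,VA,'
SELECT CHARINDEX(',' + 'CA' + ',', ',' + 'AL,CA,NY,TN,VA' + ',');
-- returns 0 (no match): a partial value like 'C' never appears with its own delimiters
SELECT CHARINDEX(',' + 'C' + ',', ',' + 'AL,CA,NY,TN,VA' + ',');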
For anyone attempting to use Dynamic SQL with a multi-valued parameter in the where clause AND use it to run an SSRS report, this is how I got around it...
create table #temp
(id int, FName varchar(100), LName varchar(100))
declare @sqlQuery nvarchar(max)
set @sqlQuery =
'insert into #temp
select
id,
FName,
LName
from dbo.table'
exec (@sqlQuery)
select
FName, LName
from #temp
where #temp.id in (@id) -- @id being an SSRS parameter!
drop table #temp
Granted, the problem with this query is that the dynamic SQL will select everything from dbo.table, and the filter only kicks in on the select from #temp, so if there's a large amount of data it's probably not so great. But... I got frustrated trying to get REPLACE, or any of the other solutions people had posted, to work.
This takes care of the middle:
SET @StateList = REPLACE(@StateList, ',', ''',''')
Then quote the edges:
SET @WhereClause1 = @WhereClause1 + 'AND customerState IN (''' + @StateList + ''') '
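Putting the two steps together (a small sketch that can be run on its own, outside the report, with the sample list hard-coded):
DECLARE @StateList varchar(100) = 'AL,CA,NY,TN,VA';
-- quote the inner separators: AL','CA','NY','TN','VA
SET @StateList = REPLACE(@StateList, ',', ''',''');
-- add the outer quotes; prints: AND customerState IN ('AL','CA','NY','TN','VA')
SELECT 'AND customerState IN (''' + @StateList + ''') ';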