How to combine multiple jsonb columns into a generated column
For a single column, the following statement works:
alter table r
add column txt TEXT GENERATED ALWAYS as (
doc->'contact'->>'name'
) stored;
I tried the following statement without success; it fails with
SQL Error [42883]: ERROR: operator does not exist: text -> unknown
alter table r
add column txt TEXT GENERATED ALWAYS as (
doc->'contact'->>'name' || ' ' || doc->'contact'->>'phone'
) stored;
Answer:
Wrap each ->> expression in parentheses. The ->, ->>, and || operators all have the same precedence in PostgreSQL and associate left to right, so the unparenthesized expression is parsed as (doc->'contact'->>'name' || ' ' || doc) -> 'contact' ->> 'phone', which tries to apply -> to a text value and raises the error above.
alter table r
add column txt TEXT GENERATED ALWAYS as (
(doc->'contact'->>'name') || ' ' || (doc->'contact'->>'phone')
) stored;
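Note that || yields NULL if either operand is NULL, so the generated column is NULL whenever name or phone is missing. If that matters, a sketch using coalesce() (still an immutable expression, so valid in a generated column) could look like this:
alter table r
add column txt TEXT GENERATED ALWAYS as (
  coalesce(doc->'contact'->>'name', '')
  || ' '
  || coalesce(doc->'contact'->>'phone', '')
) stored;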
I am creating the following table in PostgreSQL 11.0:
create table rshar.drug_source as
select * from dblink('dbname=aa port=aa host=aa user=read password=aa',
'
select
*
from
prod.drug_source
;
'
) as x ( id BIGINT,
db CHARACTER VARYING,
original_id CHARACTER VARYING,
mapped_by CHARACTER VARYING,
dataset_version_id BIGINT,
CONSTRAINT drugsource_fk1 FOREIGN KEY(id, dataset_version_id) REFERENCES drug_tbl (id,
dataset_version_id)
;
I am getting the following error:
[Code: 0, SQL State: 42601] ERROR: syntax error at or near "CONSTRAINT"
Position: 547 [Script position: 4147 - 4157]
I am not able to figure out what the syntax error near "CONSTRAINT" is.
Any help is highly appreciated.
You cannot create a table with constraints and fill it in a single SQL statement. Do it in two steps:
CREATE TABLE rshar.drug_source (
col1 type1,
...
CONSTRAINT ...
);
INSERT INTO rshar.drug_source
SELECT * from dblink(...);
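For the table in the question, the two steps might look like this sketch (the connection string is the placeholder from the question, and drug_tbl must already exist with a unique key on (id, dataset_version_id)):
CREATE TABLE rshar.drug_source (
    id                 BIGINT,
    db                 CHARACTER VARYING,
    original_id        CHARACTER VARYING,
    mapped_by          CHARACTER VARYING,
    dataset_version_id BIGINT,
    CONSTRAINT drugsource_fk1 FOREIGN KEY (id, dataset_version_id)
        REFERENCES drug_tbl (id, dataset_version_id)
);
INSERT INTO rshar.drug_source
SELECT *
FROM dblink('dbname=aa port=aa host=aa user=read password=aa',
            'select * from prod.drug_source')
     AS x (id BIGINT,
           db CHARACTER VARYING,
           original_id CHARACTER VARYING,
           mapped_by CHARACTER VARYING,
           dataset_version_id BIGINT);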
I have a task to convert DB2 SQL to Postgres SQL, and I am having a problem with the XML handling.
I tried this SQL:
SELECT xmlagg( ', <foo>abc</foo>') as YM
FROM TEST_Table
It's working fine.
But
SELECT xmlagg( concat(', ', TRIM(CATEGORY))) as YM
from TEST_Table
is not working.
Error msg : SQL Error [42883]: ERROR: function xmlagg(text) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
My question is: how do I convert the DB2 SQL below to Postgres SQL?
SELECT substr( xmlserialize( xmlagg( xmltext( concat( ', ', TRIM(NM_TEXT) ) ) ) as varchar( 1024 ) ), 3, 128 ) as YM
FROM TEST_Table
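A note on why the second statement fails and how the conversion might look: Postgres's xmlagg() only accepts xml, and text is not implicitly cast to xml, hence the 42883 error (the first statement works only because the quoted literal is an untyped constant that gets coerced to xml). Since the DB2 query uses XML purely to concatenate strings, a sketch of a plain-string Postgres equivalent (assuming NM_TEXT is a text column) is:
SELECT left(string_agg(TRIM(NM_TEXT), ', '), 128) AS YM
FROM TEST_Table;
string_agg() joins the trimmed values with ', ' directly, so the substr(..., 3, ...) that stripped the leading separator in the DB2 version is no longer needed; left(..., 128) keeps the 128-character cap.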
I'm trying to insert point geometry values and other data from one table to another table.
-- create tables
create table bh_tmp (bh_id integer, bh_name varchar
, easting decimal, northing decimal, ground_mod decimal);
create table bh (name varchar);
SELECT AddGeometryColumn('bh', 'bh_geom', 27700, 'POINT',3);
-- populate bh_tmp
insert into bh_tmp values
(1,'C5',542945.0,180846.0,3.947),
(3,'B24',542850.0,180850.0,4.020),
(4,'B26',543020.0,180850.0,4.020);
-- populate bh from bh_tmp
insert into bh(name, bh_geom) SELECT
bh_name,
CONCAT($$ST_GeomFromText('POINT($$, Easting, ' ', Northing, ' '
, Ground_mOD, $$)', 27700)$$);
FROM bh_tmp;
Gives this error:
ERROR: parse error - invalid geometry
SQL state: XX000
Hint: "ST" <-- parse error at position 2 within geometry
I can't see anything wrong with the ST_GeomFromText string that I've specified. But I can populate table bh if I insert rows 'manually', e.g.:
INSERT INTO bh (name, bh_geom)
VALUES ('C5', ST_GeomFromText('POINT(542945.0 180846.0 3.947)', 27700));
What am I doing wrong?
First of all, there is a misplaced semicolon after CONCAT(...).
And you can't concatenate the function call itself into the string - CONCAT just produces text beginning with "ST", which the geometry parser then rejects. Build only the WKT string and pass it to the function:
INSERT INTO bh(name, bh_geom)
SELECT bh_name
, ST_GeomFromText('POINT(' || concat_ws(' ', easting, northing, ground_mod) || ')'
, 27700)
FROM bh_tmp;
Or, since you have values already (not text), you could use ST_MakePoint() and ST_SetSRID():
ST_SetSRID(ST_MakePoint(easting, northing, ground_mod), 27700)
Should be faster.
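For completeness, the same insert written with that expression:
INSERT INTO bh(name, bh_geom)
SELECT bh_name
     , ST_SetSRID(ST_MakePoint(easting, northing, ground_mod), 27700)
FROM bh_tmp;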
You're getting that error because the output of the CONCAT function is text, and your bh_geom column is geometry, so you're trying to insert text into geometry. This will work:
INSERT INTO bh(name, bh_geom) SELECT
bh_name,
ST_GeomFromText('POINT('
|| easting|| ' '
|| Northing
|| ' '
|| Ground_mOD
|| ')', 27700)
FROM bh_tmp;
-- Given a CSV string like this:
declare @roles varchar(800)
select @roles = 'Pub,RegUser,ServiceAdmin'
-- Question: How to get roles into a table view like this:
select 'Pub'
union
select 'RegUser'
union
select 'ServiceAdmin'
After posting this, I started playing with some dynamic SQL. This seems to work, but it seems like there might be some security risks in using dynamic SQL - thoughts on this?
declare @rolesSql varchar(800)
select @rolesSql = 'select ''' + replace(@roles, ',', ''' union select ''') + ''''
exec(@rolesSql)
If you're working with SQL Server compatibility level 130 or higher, then the STRING_SPLIT function is now the most succinct method available.
Reference link: https://msdn.microsoft.com/en-gb/library/mt684588.aspx
Usage:
SELECT * FROM string_split('Pub,RegUser,ServiceAdmin',',')
RESULT:
value
-----------
Pub
RegUser
ServiceAdmin
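STRING_SPLIT also works directly with a variable, so the original declaration can be reused as-is:
DECLARE @roles varchar(800) = 'Pub,RegUser,ServiceAdmin'
SELECT value FROM STRING_SPLIT(@roles, ',')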
But basically you would create this function in your DB:
CREATE FUNCTION dbo.Split(@origString varchar(max), @Delimiter char(1))
RETURNS @temptable TABLE (items varchar(max))
AS
begin
    declare @idx int
    declare @split varchar(max)
    select @idx = 1
    if len(@origString) < 1 or @origString is null return
    while @idx != 0
    begin
        set @idx = charindex(@Delimiter, @origString)
        if @idx != 0
            set @split = left(@origString, @idx - 1)
        else
            set @split = @origString
        if len(@split) > 0
            insert into @temptable(items) values(@split)
        set @origString = right(@origString, len(@origString) - @idx)
        if len(@origString) = 0 break
    end
    return
end
and then call the function and pass in the string you want to split.
Select * From dbo.Split(@roles, ',')
Here's a thorough discussion of your options:
Arrays and Lists in SQL Server
What I do in this case is use some string replacement to convert the CSV to JSON and then open the JSON like a table. It may not be suitable for every use case, but it is very simple to get running and works with strings and files. With files you just need to watch your line-break characters; mostly I find them to be Char(13)+Char(10).
declare @myCSV nvarchar(MAX) = N'"Id";"Duration";"PosX";"PosY"
"•P001";223;-30;35
"•P002";248;-28;35
"•P003";235;-26;35'
-- CSV to JSON
-- convert to json by replacing some stuff
declare @myJson nvarchar(MAX) = '[['+ replace(@myCSV, Char(13)+Char(10), '],[' ) +']]'
set @myJson = replace(@myJson, ';', ',') -- Optional: ensure comma delimiters for json if the current delimiter differs
-- set @myJson = replace(@myJson, ',,', ',null,') -- Optional: empty in between
-- set @myJson = replace(@myJson, ',]', ',null]') -- Optional: empty before linebreak
SELECT
    ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 AS LineNumber, *
FROM OPENJSON( @myJson )
with (
    col0 varchar(255) '$[0]'
    ,col1 varchar(255) '$[1]'
    ,col2 varchar(255) '$[2]'
    ,col3 varchar(255) '$[3]'
    ,col4 varchar(255) '$[4]'
    ,col5 varchar(255) '$[5]'
    ,col6 varchar(255) '$[6]'
    ,col7 varchar(255) '$[7]'
    ,col8 varchar(255) '$[8]'
    ,col9 varchar(255) '$[9]'
    -- any column names and count are possible
) csv
order by (SELECT 0) OFFSET 1 ROWS -- hide header row
Using SQL Server's built-in XML parsing is also an option. Of course, this glosses over all the nuances of an RFC 4180-compliant CSV.
-- Given a CSV string like this:
declare @roles varchar(800)
select @roles = 'Pub,RegUser,ServiceAdmin'
-- Here's the XML way
select split.csv.value('.', 'varchar(100)') as value
from (
select cast('<x>' + replace(@roles, ',', '</x><x>') + '</x>' as xml) as data
) as csv
cross apply data.nodes('/x') as split(csv)
If you are using SQL Server 2016+, STRING_SPLIT is better, but this was the common way to do it before SQL 2016.
Using BULK INSERT you can import a CSV file into your SQL table:
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
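A minimal sketch, assuming a comma-delimited file at C:\data\roles.csv and a target table dbo.Roles whose columns match the file (both names are hypothetical):
BULK INSERT dbo.Roles
FROM 'C:\data\roles.csv'
WITH (
    FIELDTERMINATOR = ',',  -- fields separated by commas
    ROWTERMINATOR = '\n'    -- one row per line
);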
The accepted answer works fine, but I found this function to be much faster, even for thousands of records. Create the function below and use it.
IF EXISTS (
    SELECT 1
    FROM Information_schema.Routines
    WHERE Specific_schema = 'dbo'
        AND specific_name = 'FN_CSVToStringListTable'
        AND Routine_Type = 'FUNCTION'
)
BEGIN
    DROP FUNCTION [dbo].[FN_CSVToStringListTable]
END
GO
CREATE FUNCTION [dbo].[FN_CSVToStringListTable] (@InStr VARCHAR(MAX))
RETURNS @TempTab TABLE (Id NVARCHAR(max) NOT NULL)
AS
BEGIN
    -- Ensure the input ends with a comma and collapse doubled commas
    SET @InStr = REPLACE(@InStr + ',', ',,', ',')
    DECLARE @SP INT
    DECLARE @VALUE VARCHAR(1000)
    WHILE PATINDEX('%,%', @InStr) <> 0
    BEGIN
        SELECT @SP = PATINDEX('%,%', @InStr)
        SELECT @VALUE = LEFT(@InStr, @SP - 1)
        SELECT @InStr = STUFF(@InStr, 1, @SP, '')
        INSERT INTO @TempTab (Id)
        VALUES (@VALUE)
    END
    RETURN
END
GO
--- Test like this:
declare @v as NVARCHAR(max) = N'asdf,,as34df,234df,fs,,34v,5fghwer,56gfg,';
SELECT Id FROM dbo.FN_CSVToStringListTable(@v)
I was about to use the solution mentioned in the accepted answer, but doing more research led me to use Table Value Types:
These are far more efficient and you don't need a TVF (table-valued function) just to create a table from CSV. You can use it directly in your scripts or pass it to a stored procedure as a table-valued parameter. The type can be created as:
CREATE TYPE [UniqueIdentifiers] AS TABLE(
[Id] [varchar](20) NOT NULL
)
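A usage sketch (the values and the procedure name are illustrative):
DECLARE @ids UniqueIdentifiers;
INSERT INTO @ids (Id)
VALUES ('Pub'), ('RegUser'), ('ServiceAdmin');
SELECT Id FROM @ids;
-- Passing it to a (hypothetical) stored procedure whose parameter
-- is declared as UniqueIdentifiers READONLY:
-- EXEC dbo.ProcessIds @UniqueIds = @ids;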