I'm trying to write a PL/pgSQL function that takes delimited text as input and inserts rows into a table that includes points. I've got a test function that works OK:
rowsaz := string_to_array(input,'?');
INSERT INTO test (num1, num2, my_name)
SELECT * FROM unnest(string_to_array(rowsaz[1],',')::integer[],string_to_array(rowsaz[2],',')::integer[],string_to_array(rowsaz[3],',')::varchar[]);
return true;
So if you call the function with
SELECT myfunction('1,2,3?4,5,6?a,b,c')
Then you get a table like
1 4 a
2 5 b
3 6 c
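For reference, the complete test function might look something like this (a sketch: the test table with columns num1, num2, my_name is taken from the INSERT above, and the argument name input is assumed):
CREATE OR REPLACE FUNCTION myfunction(input text) RETURNS boolean AS $$
DECLARE
    rowsaz text[];
BEGIN
    rowsaz := string_to_array(input, '?');
    -- unnest the three comma-separated lists in parallel, one row per position
    INSERT INTO test (num1, num2, my_name)
    SELECT * FROM unnest(
        string_to_array(rowsaz[1], ',')::integer[],
        string_to_array(rowsaz[2], ',')::integer[],
        string_to_array(rowsaz[3], ',')::varchar[]);
    RETURN true;
END;
$$ LANGUAGE plpgsql;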
But how do you do this when you have a table like
CREATE TABLE public.gps_points
(
id integer NOT NULL DEFAULT nextval('gps_id_seq'::regclass),
location geometry(Point,4326),
created_at timestamp with time zone DEFAULT CURRENT_TIMESTAMP,
user_id integer
)
An insert would look like
INSERT INTO gps_points (location, user_id) VALUES (ST_GeomFromText('POINT(-71.060316 48.432044)', 4326),2);
But this gets tricky with the unnest because you have to pass the coordinates into the ST_POINT function. Then I would call the function with:
SELECT myfunction('36.98,36.99,36.97?45.85,45.86,45.87?1,2,3')
I'm trying to do the following, but it won't work
insert into gps_points( geom, user_id)
select unnest( (ST_GeomFromText('POINT(string_to_array(rowsaz[1],',')::double precision[] string_to_array(rowsaz[2],',')::double precision[])', 4326),string_to_array(rowsaz[3],',')::double precision[]));
Somebody helped me get it:
insert into gps_points (location, user_id)
select ST_SetSRID(ST_MakePoint(lng, lat), 4326), uid
from unnest(
    string_to_array(rowsaz[1],',')::double precision[],
    string_to_array(rowsaz[2],',')::double precision[],
    string_to_array(rowsaz[3],',')::integer[]) as u(lng, lat, uid);
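Wrapped up the same way, the whole function might look like this (a sketch following the accepted query above; note that the gps_points table defined earlier names its geometry column location):
CREATE OR REPLACE FUNCTION myfunction(input text) RETURNS boolean AS $$
DECLARE
    rowsaz text[];
BEGIN
    rowsaz := string_to_array(input, '?');
    -- the three arrays are: longitudes, latitudes, user ids
    INSERT INTO gps_points (location, user_id)
    SELECT ST_SetSRID(ST_MakePoint(lng, lat), 4326), uid
    FROM unnest(
        string_to_array(rowsaz[1], ',')::double precision[],
        string_to_array(rowsaz[2], ',')::double precision[],
        string_to_array(rowsaz[3], ',')::integer[]) AS u(lng, lat, uid);
    RETURN true;
END;
$$ LANGUAGE plpgsql;
Calling SELECT myfunction('36.98,36.99,36.97?45.85,45.86,45.87?1,2,3') then inserts three points, one per user id.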
I wanted to delete all records except the one with the highest value, so I did:
CREATE TABLE code (
id SERIAL,
name VARCHAR(255) NOT NULL ,
value int NOT NULL
);
INSERT INTO code (name,value) VALUES ('name',1);
INSERT INTO code (name,value) VALUES ('name',2);
INSERT INTO code (name,value) VALUES ('name',3);
INSERT INTO code (name,value) VALUES ('name1',3);
INSERT INTO code (name,value) VALUES ('name2',1);
INSERT INTO code (name,value) VALUES ('name2',3);
For example, I want to delete all records except the one with the highest value in the value column for each name.
I am expecting to get this result:
name 3
name1 3
name2 3
I tried doing
DELETE FROM code where value != (select MAX(value) value from code where count(code) > 1)
But I'm getting an error like:
ERROR: aggregate functions are not allowed in WHERE
LINE 1: ...value != (select MAX(value) value from code where count(code...
Combining everyone's ideas with this:
SELECT dept, SUM(expense) FROM records
WHERE ROW(year, dept) IN (SELECT x, y FROM otherTable)
GROUP BY dept;
I was able to make the query I wanted.
Your query makes no sense. Try this:
DELETE FROM code
where value <> (select value
FROM (SELECT count(*) AS count,
value
from code
GROUP BY value) AS q
ORDER BY count DESC
FETCH FIRST 1 ROWS ONLY);
The fast and easy solution would be:
BEGIN;
SELECT name,max(value) INTO temp t FROM code group by 1;
TRUNCATE code;
insert into code (name, value) SELECT * FROM t;
END;
Or you can do it like this:
BEGIN;
DELETE FROM code
USING (SELECT name, max(value) FROM code GROUP BY 1) a
WHERE code.name = a.name AND code.value != a.max;
END;
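A window-function variant of the same idea, sketched against the sample table above (rank() keeps ties; use row_number() instead to keep exactly one row per name):
DELETE FROM code
WHERE id IN (
    SELECT id
    FROM (SELECT id,
                 rank() OVER (PARTITION BY name ORDER BY value DESC) AS rnk
          FROM code) ranked
    WHERE rnk > 1
);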
I have a table named Scoreboard which contains a field named score, an array containing the values 27,56,78,12,89,77,34,23,90,87,33,55,30,67,76,87,56, and I want to write a PostgreSQL procedure to fetch three categories:
category 1 = top 10% of the total number of values in the array
category 2 = top 20% of the total number of values in the array
category 3 = top 30% of the total number of values in the array
and put them in an array in the same format, i.e.
[category 1 values,category 2 values,category 3 values]
Something like this should do:
t=# with p as (
with ntile as (
with v as (
select unnest('{27,56,78,12,89,77,34,23,90,87,33,55,30,67,76,87,56}'::int[]) a
)
select a,ntile(10) over(order by a desc)
from v
)
select distinct string_agg(a::text,',') over (partition by ntile),ntile
from ntile
where ntile <=3 order by ntile
)
select array_agg(concat('category ',ntile,' ',string_agg))
from p;
array_agg
------------------------------------------------------------
{"category 1 90,89","category 2 87,87","category 3 78,77"}
(1 row)
Time: 0.493 ms
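The nested WITH blocks can also be written as a flat chain of CTEs, which some find easier to read (a sketch of the same logic; the CTE names are arbitrary):
WITH v AS (
    SELECT unnest('{27,56,78,12,89,77,34,23,90,87,33,55,30,67,76,87,56}'::int[]) AS a
), buckets AS (
    SELECT a, ntile(10) OVER (ORDER BY a DESC) AS ntile
    FROM v
), p AS (
    SELECT DISTINCT string_agg(a::text, ',') OVER (PARTITION BY ntile) AS vals, ntile
    FROM buckets
    WHERE ntile <= 3
)
SELECT array_agg(concat('category ', ntile, ' ', vals) ORDER BY ntile)
FROM p;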
I am assuming you have a table with one column as id and another of an array type. Based on that assumption,
I have created the table below and inserted two values into it.
create table test_array (id int, "values" int[]);  -- "values" is a reserved word, so it must be quoted
insert into test_array values(1 ,'{27,56,78,12,89,77,34,23,90,87,33,55,30,67,76,87,56}' );
insert into test_array values(2 ,'{72,65,84,21,98,77,43,32,9,78,41,66,3,76,67,88,56}' );
Below is a function which finds the categories as you described. If you do not have an id column in your table,
you can add a row number using the window function row_number().
create or replace function find_category() returns table(category text[]) as
$$
BEGIN
return query with unnestColumn as (
    select t.id, v.val,
           ntile(10) over(partition by t.id order by v.val desc) as ntilenumber
    from test_array t
    cross join unnest(t."values") as v(val)
), groupedCategory as (
    select u.id, u.ntilenumber,
           string_agg(u.val::text, ',') as combinedvalues
    from unnestColumn u
    where u.ntilenumber <= 3
    group by u.id, u.ntilenumber
)
select array_agg(concat('Category ', g.ntilenumber, ' ', g.combinedvalues) order by g.ntilenumber)
from groupedCategory g
group by g.id;
END;
$$
language 'plpgsql';
Execute the function below to check the output:
select * from find_category();
The end result I am after is a query that calls a function, and that function returns a set of records in their own separate fields. I can do this, but the results of the function all end up in one field,
i.e. http://i.stack.imgur.com/ETLCL.png, while the results I am after are: http://i.stack.imgur.com/wqRQ9.png
Here's the code to create the table
CREATE TABLE tbl_1_hm
(
tbl_1_hm_id bigserial NOT NULL,
tbl_1_hm_f1 VARCHAR (250),
tbl_1_hm_f2 INTEGER,
CONSTRAINT tbl_1_hm_pkey PRIMARY KEY (tbl_1_hm_id)
)
-- do this a few times to get some data
INSERT INTO tbl_1_hm (tbl_1_hm_f1, tbl_1_hm_f2)
VALUES ('hello', 1);
CREATE OR REPLACE FUNCTION proc_1_hm(id BIGINT)
RETURNS TABLE(tbl_1_hm_f1 VARCHAR(250), tbl_1_hm_f2 int) AS $$
SELECT tbl_1_hm_f1, tbl_1_hm_f2
FROM tbl_1_hm
WHERE tbl_1_hm_id = id
$$ LANGUAGE SQL;
--And here is the current query I am running for my results:
SELECT t1.tbl_1_hm_id, proc_1_hm(t1.tbl_1_hm_id) AS t3
FROM tbl_1_hm AS t1
Thanks for having a read. Please, if you want to haggle about the semantics of hitting the same table twice or about my naming convention: this is a simplified test.
When a function returns a set of records, you should treat it as a table source:
SELECT t1.tbl_1_hm_id, t3.*
FROM tbl_1_hm AS t1, proc_1_hm(t1.tbl_1_hm_id) AS t3;
Note that set-returning functions in the FROM clause are implicitly LATERAL joined, so you can use fields from tables listed previously without having to specify an explicit join condition.
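For readability, the same query can also be written with the join spelled out; the comma form above is equivalent to:
SELECT t1.tbl_1_hm_id, t3.*
FROM tbl_1_hm AS t1
CROSS JOIN LATERAL proc_1_hm(t1.tbl_1_hm_id) AS t3;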
Hello, what is the easiest way to duplicate a DB record in the same table?
My problem is that the table where I am doing this has many columns, like 100+, and I don't like how the solution looks. Here is what I do (this is inside a PL/pgSQL function):
...
1. duplicate record
INSERT INTO history
SELECT NEXTVAL('history_id_seq'), col_1, col_2, ... , col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;
2. update some columns
UPDATE history
SET
col_5 = 'test_5',
col_23 = 'test_23',
datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;
Here are the problems I am attempting to solve:
1. Listing all these 100+ columns looks lame.
2. When a new column is added, the function eventually has to be updated too.
3. On separate DB instances the column order might differ, which would cause the function to fail.
I am not sure if I can list them once more (solving issue 3), like insert into <table> (<columns_list>) values (<query>), but then the query looks even uglier.
I would like to achieve something like 'insert into ', but this seems impossible because the unique primary key constraint would raise a duplication error.
Any suggestions?
Thanks in advance for your time.
This isn't pretty or particularly optimized, but there are a couple of ways to go about this. Ideally you might want to do this all in an UPDATE trigger, though you could implement a duplication function something like this:
-- create source table
CREATE TABLE history (history_id serial not null primary key, col_2 int, col_3 int, col_4 int, datetime timestamptz default now());
-- add some data
INSERT INTO history (col_2, col_3, col_4)
SELECT g, g * 10, g * 100 FROM generate_series(1, 100) AS g;
-- function to duplicate record
CREATE OR REPLACE FUNCTION fn_history_duplicate(p_history_id integer) RETURNS SETOF history AS
$BODY$
DECLARE
cols text;
insert_statement text;
BEGIN
-- build list of columns
SELECT array_to_string(array_agg(column_name::name), ',') INTO cols
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'history')
AND column_name <> 'history_id';
-- build insert statement
insert_statement := 'INSERT INTO history (' || cols || ') SELECT ' || cols || ' FROM history WHERE history_id = $1 RETURNING *';
-- execute statement
RETURN QUERY EXECUTE insert_statement USING p_history_id;
RETURN;
END;
$BODY$
LANGUAGE 'plpgsql';
-- test
SELECT * FROM fn_history_duplicate(1);
history_id | col_2 | col_3 | col_4 | datetime
------------+-------+-------+-------+-------------------------------
101 | 1 | 10 | 100 | 2013-04-15 14:56:11.131507+00
(1 row)
As I noted in my original comment, you might also take a look at the colnames extension as an alternative to querying the information schema.
You don't need the update anyway; you can supply the constant values directly in the SELECT statement:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'),
col_1,
col_2,
col_3,
col_4,
'test_5',
...
'test_23',
...,
col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;
I'm using PostgreSQL 9.0 and I have a table with just an artificial key (auto-incrementing sequence) and another unique key. (Yes, there is a reason for this table. :)) I want to look up an ID by the other key or, if it doesn't exist, insert it:
SELECT id
FROM mytable
WHERE other_key = 'SOMETHING'
Then, if no match:
INSERT INTO mytable (other_key)
VALUES ('SOMETHING')
RETURNING id
The question: is it possible to save a round-trip to the DB by doing both of these in one statement? I can insert the row if it doesn't exist like this:
INSERT INTO mytable (other_key)
SELECT 'SOMETHING'
WHERE NOT EXISTS (SELECT * FROM mytable WHERE other_key = 'SOMETHING')
RETURNING id
... but that doesn't give the ID of an existing row. Any ideas? There is a unique constraint on other_key, if that helps.
Have you tried to union it?
Edit - this requires Postgres 9.1:
create table mytable (id serial primary key, other_key varchar not null unique);
WITH new_row AS (
INSERT INTO mytable (other_key)
SELECT 'SOMETHING'
WHERE NOT EXISTS (SELECT * FROM mytable WHERE other_key = 'SOMETHING')
RETURNING *
)
SELECT * FROM new_row
UNION
SELECT * FROM mytable WHERE other_key = 'SOMETHING';
results in:
id | other_key
----+-----------
1 | SOMETHING
(1 row)
No, there is no special SQL syntax that allows you to do select-or-insert. You can do what Ilia mentions and create a sproc, which means it will not do a round trip from the client to the server, but it will still result in two queries (three actually, if you count the sproc itself).
Using 9.5, I successfully tried this, based on Denis de Bernardy's answer:
only 1 parameter
no union
no stored procedure
atomic, thus no concurrency problems (I think...)
The Query:
WITH neworexisting AS (
INSERT INTO mytable(other_key) VALUES('hello 1')
ON CONFLICT(other_key) DO UPDATE SET existed=true -- need some update to return sth
RETURNING *
)
SELECT * FROM neworexisting
first call:
id|other_key|created |existed|
--|---------|-------------------|-------|
6|hello 1 |2019-09-11 11:39:29|false |
second call:
id|other_key|created |existed|
--|---------|-------------------|-------|
6|hello 1 |2019-09-11 11:39:29|true |
First create your table ;-)
CREATE TABLE mytable (
id serial NOT NULL,
other_key text NOT NULL,
created timestamptz NOT NULL DEFAULT now(),
existed bool NOT NULL DEFAULT false,
CONSTRAINT mytable_pk PRIMARY KEY (id),
CONSTRAINT mytable_uniq UNIQUE (other_key) --needed for on conflict
);
You can use a stored procedure:
IF (SELECT id FROM mytable WHERE other_key = 'SOMETHING' LIMIT 1) IS NULL THEN
    INSERT INTO mytable (other_key) VALUES ('SOMETHING');
END IF;
I have an alternative to Denis's answer that I think is less database-intensive, although a bit more complex:
create table mytable (id serial primary key, other_key varchar not null unique);
WITH table_sel AS (
SELECT id
FROM mytable
WHERE other_key = 'test'
UNION
SELECT NULL AS id
ORDER BY id NULLS LAST
LIMIT 1
), table_ins AS (
INSERT INTO mytable (id, other_key)
SELECT
COALESCE(id, NEXTVAL('mytable_id_seq'::REGCLASS)),
'test'
FROM table_sel
ON CONFLICT (id) DO NOTHING
RETURNING id
)
SELECT * FROM table_ins
UNION ALL
SELECT * FROM table_sel
WHERE id IS NOT NULL;
In the table_sel CTE I look for the right row. If I don't find it, I make sure table_sel still returns at least one row, via a union with a SELECT NULL.
In the table_ins CTE I try to insert the same row I was looking for earlier. COALESCE(id, NEXTVAL('mytable_id_seq'::REGCLASS)) says: if id is defined, use it; if id is null, increment the sequence and use that new value to insert a row. The ON CONFLICT clause ensures
that if id is already in mytable, nothing is inserted.
At the end I put everything together with a UNION between table_ins and table_sel, so that I'm sure to get my id value and to execute both CTEs.
This query needs to search for the value of other_key only once, and it is a "search for this value" rather than a "check that this value does not exist in the table", which is very heavy; in Denis's alternative, other_key is used in both kinds of searches. In my query the "check that a value does not exist" is done only on id, an integer primary key, which by construction is fast.
Minor tweak a decade late to Denis's excellent answer:
-- Create the table with a unique constraint
CREATE TABLE mytable (
id serial PRIMARY KEY
, other_key varchar NOT NULL UNIQUE
);
WITH new_row AS (
-- Only insert when we don't find anything, avoiding a table lock if
-- possible.
INSERT INTO mytable ( other_key )
SELECT 'SOMETHING'
WHERE NOT EXISTS (
SELECT *
FROM mytable
WHERE other_key = 'SOMETHING'
)
RETURNING *
)
(
-- This comes first in the UNION ALL since it'll almost certainly be
-- in the query cache. Marginally slower for the insert case, but also
-- marginally faster for the much more common read-only case.
SELECT *
FROM mytable
WHERE other_key = 'SOMETHING'
-- Don't check for duplicates to be removed
UNION ALL
-- If we reach this point in iteration, we needed to do the INSERT and
-- lock after all.
SELECT *
FROM new_row
) LIMIT 1 -- Just return whatever comes first in the results and allow
-- the query engine to cut processing short for the INSERT
-- calculation.
;
The UNION ALL tells the planner it doesn't have to collect results for de-duplication. The LIMIT 1 at the end allows the planner to short-circuit further processing/iteration once it knows there's an answer available.
NOTE: There is a race condition present here and in the original answer. If the entry does not already exist, the INSERT will fail with a unique constraint violation. The error can be suppressed with ON CONFLICT DO NOTHING, but the query will return an empty set instead of the new row. This is a difficult problem because getting that info from another transaction would violate the I in ACID.
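One common workaround (a sketch for PostgreSQL 9.5+, reusing the mytable definition above) is to combine ON CONFLICT DO NOTHING with a fallback SELECT:
WITH ins AS (
    INSERT INTO mytable (other_key)
    VALUES ('SOMETHING')
    ON CONFLICT (other_key) DO NOTHING
    RETURNING id, other_key
)
SELECT id, other_key FROM ins      -- the row we just inserted, if any
UNION ALL
SELECT id, other_key FROM mytable  -- otherwise the pre-existing row
WHERE other_key = 'SOMETHING'
LIMIT 1;
Even this is not bulletproof: if a concurrent transaction commits the row after this statement takes its snapshot, the INSERT is skipped but the fallback SELECT may not see the new row yet, so the statement can still return zero rows. A retry loop in application code, or the ON CONFLICT ... DO UPDATE trick shown earlier, remains the more robust option.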