PostgreSQL, SQL state: 42601

I want to insert into a table (circuit) using a SELECT that takes values from two tables (segment and wgs). My query:
INSERT INTO circuit (id_circuit, description, date_start, date_end, speed,
length, duration)
SELECT (seg.id_segment, cir.nomcircuit, seg.date_start, seg.date_end, seg.speed_average,
cir.shape_leng, (seg.date_end - seg.date_start))
FROM segment seg, wgs cir where seg.id = 13077
My Tables: circuit:
CREATE TABLE circuit
(
id serial NOT NULL,
id_circuit integer,
description character varying(50),
date_start time without time zone,
date_end time without time zone,
speed double precision,
length double precision,
duration double precision,
CONSTRAINT circuit_pkey PRIMARY KEY (id)
)
segment:
CREATE TABLE segment
(
id serial NOT NULL,
id_segment integer,
date_start timestamp without time zone,
date_end timestamp without time zone,
speed_average double precision,
mt_identity character varying,
truck_type character varying,
CONSTRAINT segment_pkey PRIMARY KEY (id)
)
wgs:
CREATE TABLE wgs
(
id serial NOT NULL,
nomcircuit character varying(50),
shape_leng numeric,
CONSTRAINT wgs_pkey PRIMARY KEY (id)
)
But when I run my query, I get this error:
ERROR: INSERT has more target columns than expressions
LINE 1: INSERT INTO circuit (id_circuit, description, dat...
^
HINT: The insertion source is a row expression containing the same number of columns
expected by the INSERT. Did you accidentally use extra parentheses?
As far as I can see, I do not have extra parentheses. I double-checked that the column data types match and tried various things, but I still don't see why the error occurs.
PS: the 13077 is just to try it out with one value I'm sure I have.

This constructs an anonymous composite value:
select (1, 'a');
For example:
=> select (1, 'a');
row
-------
(1,a)
(1 row)
=> select row(1, 'a');
row
-------
(1,a)
(1 row)
Note that this is a single composite value, not multiple values.
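For contrast, the same values selected without parentheses come back as two separate columns:
=> select 1, 'a';
 ?column? | ?column?
----------+----------
        1 | a
(1 row)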
From the fine manual:
8.16.2. Composite Value Input
To write a composite value as a literal constant, enclose the field values within parentheses and separate them by commas. You can put double quotes around any field value, and must do so if it contains commas or parentheses.
[...]
The ROW expression syntax can also be used to construct composite values. In most cases this is considerably simpler to use than the string-literal syntax since you don't have to worry about multiple layers of quoting. We already used this method above:
ROW('fuzzy dice', 42, 1.99)
ROW('', 42, NULL)
The ROW keyword is actually optional as long as you have more than one field in the expression, so these can simplify to:
('fuzzy dice', 42, 1.99)
('', 42, NULL)
The Row Constructors section might also be of interest.
When you say this:
INSERT INTO circuit (id_circuit, description, date_start, date_end, speed,
length, duration)
SELECT (...)
FROM segment seg, wgs cir where seg.id = 13077
your SELECT list has only one column, since the whole (...) expression represents a single composite value. The solution is simply to drop those parentheses:
INSERT INTO circuit (id_circuit, description, date_start, date_end, speed, length, duration)
SELECT seg.id_segment, ..., (seg.date_end - seg.date_start)
FROM segment seg, wgs cir where seg.id = 13077
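For reference, the complete statement with the parentheses removed would be as follows. Note that seg.date_end - seg.date_start yields an interval, so for the double precision duration column you will likely also want EXTRACT(EPOCH FROM ...) to get a number of seconds:
INSERT INTO circuit (id_circuit, description, date_start, date_end, speed, length, duration)
SELECT seg.id_segment,
       cir.nomcircuit,
       seg.date_start,
       seg.date_end,
       seg.speed_average,
       cir.shape_leng,
       EXTRACT(EPOCH FROM (seg.date_end - seg.date_start))  -- interval -> seconds
FROM segment seg, wgs cir
WHERE seg.id = 13077;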

Related

Function to insert data into different tables

I have three tables in PostgreSQL:
CREATE TABLE organization (id int, name text, parent_id int);
CREATE TABLE staff (id int, name text, family text, organization_id int);
CREATE TABLE clock(id int, staff_id int, Date date, Time time);
I need a function that gets all the fields of these tables as inputs (8 in total) and then inserts them into the appropriate fields of the tables.
Here is my code:
CREATE FUNCTION insert_into_tables(org_name character varying(50), org_PID int, person_name character varying(50),_family character varying(50), org_id int, staff_id int,_date date, _time time without time zone)
RETURNS void AS $$
BEGIN
INSERT INTO "Org".organisation("Name", "PID")
VALUES ($1, $2);
INSERT INTO "Org".staff("Name", "Family", "Organization_id")
VALUES ($3, $4, $5);
INSERT INTO "Org"."Clock"("Staff_Id", "Date", "Time")
VALUES ($6, $7, $8);
END;
$$ LANGUAGE plpgsql;
select * from insert_into_tables('SASAD',9,'mamad','Imani',2,2,1397-10-22,'08:26:47')
But no data is inserted. I get the error:
ERROR: function insert_into_tables(unknown, integer, unknown, unknown, integer, integer, integer, unknown) does not exist
LINE 17: select * from insert_into_tables('SASAD',9,'mamad','Imani',2... ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Where did I go wrong?
That's because the second-to-last parameter is declared as date, not int. You forgot the single quotes:
select * from insert_into_tables('SASAD',9,'mamad','Imani',2,2,'1397-10-22','08:26:47');
Without single quotes, this is interpreted as subtraction between three integer constants, resulting in an integer: 1397 - 10 - 22 = 1365.
Also fix your identifiers: double-quoting preserves uppercase letters, so "Name" is distinct from name, etc. See:
Are PostgreSQL column names case-sensitive?
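Putting both fixes together, a corrected version might look like this; a sketch assuming the plain lower-case tables from the CREATE TABLE statements above rather than the quoted "Org" identifiers, with the staff_id parameter renamed to _staff_id to match the other underscore-prefixed names:
CREATE FUNCTION insert_into_tables(
    org_name text, org_pid int,
    person_name text, _family text, org_id int,
    _staff_id int, _date date, _time time without time zone)
RETURNS void AS $$
BEGIN
    -- unquoted identifiers fold to lower case, matching the table definitions
    INSERT INTO organization(name, parent_id) VALUES (org_name, org_pid);
    INSERT INTO staff(name, family, organization_id) VALUES (person_name, _family, org_id);
    INSERT INTO clock(staff_id, date, time) VALUES (_staff_id, _date, _time);
END;
$$ LANGUAGE plpgsql;

SELECT insert_into_tables('SASAD', 9, 'mamad', 'Imani', 2, 2, '1397-10-22', '08:26:47');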

Does Character Varying Array apply precision restrictions to each element within the array?

I'm trying to implement a character varying array in this format:
<column-name> character varying(7)[ ]
I want to create an array of character varying where each element still upholds the 7-character limit.
Will Postgres do that for me?
Yes, PostgreSQL will do that as you wish, as you can easily verify:
CREATE TABLE vararr(
id integer PRIMARY KEY,
v varchar(7)[]
);
INSERT INTO vararr VALUES (1, '{abc,def,ghi}');
INSERT INTO vararr VALUES (2, '{abcabcabc,def,ghi}');
ERROR: value too long for type character varying(7)
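The same per-element check applies when you use an ARRAY constructor instead of an array literal:
INSERT INTO vararr VALUES (3, ARRAY['abc','def']);        -- OK
INSERT INTO vararr VALUES (4, ARRAY['abcabcabc','def']);  -- same "value too long" error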

Physical size of int2, int4, int8 in PostgreSQL

I don't understand the differences between the storage sizes of the integer types (all of these types have a fixed size).
In the official manual I see this description:
The types smallint, integer, and bigint store whole numbers, that is,
numbers without fractional components, of various ranges. Attempts to
store values outside of the allowed range will result in an error.
The type integer is the common choice, as it offers the best balance
between range, storage size, and performance. The smallint type is
generally only used if disk space is at a premium. The bigint type is
designed to be used when the range of the integer type is
insufficient.
SQL only specifies the integer types integer (or int), smallint, and
bigint. The type names int2, int4, and int8 are extensions, which are
also used by some other SQL database systems.
However, a simple test shows that changing the column's type does not change the table size:
create table test_big_table_int (
f_int integer
);
INSERT INTO test_big_table_int (f_int )
SELECT ceil(random() * 1000)
FROM generate_series(1,1000000);
SELECT
pg_size_pretty(pg_total_relation_size(relid)) As "Size_of_table"
FROM pg_catalog.pg_statio_user_tables
where relname = 'test_big_table_int';
--"35 MB";
alter table test_big_table_int ALTER COLUMN f_int TYPE bigint;
SELECT
pg_size_pretty(pg_total_relation_size(relid)) As "Size_of_table"
FROM pg_catalog.pg_statio_user_tables
where relname = 'test_big_table_int';
--"35 MB";
alter table test_big_table_int ALTER COLUMN f_int TYPE smallint;
SELECT
pg_size_pretty(pg_total_relation_size(relid)) As "Size_of_table"
FROM pg_catalog.pg_statio_user_tables
where relname = 'test_big_table_int';
--"35 MB";"0 bytes"
Every time I get the same table size: 35 MB. So where is the benefit of using integer or smallint instead of int8?
And a second question: why does Postgres rewrite the tuples when changing the type of an int column (int2 <-> int4 <-> int8)?
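The per-value sizes do differ; a quick way to see them is pg_column_size. For a single-column table, though, the per-row header and alignment padding dominate, which is why the total relation size barely moves:
SELECT pg_column_size(1::int2) AS int2_bytes,  -- 2
       pg_column_size(1::int4) AS int4_bytes,  -- 4
       pg_column_size(1::int8) AS int8_bytes;  -- 8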

How to create a pageable function in PostgreSQL

I have two tables: event and location
CREATE TABLE location
(
location_id bigint NOT NULL,
version bigint NOT NULL,
active boolean NOT NULL,
created timestamp without time zone NOT NULL,
latitude double precision NOT NULL,
longitude double precision NOT NULL,
updated timestamp without time zone,
CONSTRAINT location_pkey PRIMARY KEY (location_id)
)
CREATE TABLE event
(
event_id bigint NOT NULL,
version bigint NOT NULL,
active boolean NOT NULL,
created timestamp without time zone NOT NULL,
end_date date,
entry_fee numeric(19,2),
location_id bigint NOT NULL,
organizer_id bigint NOT NULL,
start_date date NOT NULL,
timetable_id bigint,
updated timestamp without time zone,
CONSTRAINT event_pkey PRIMARY KEY (event_id),
CONSTRAINT fk_organizer FOREIGN KEY (organizer_id)
REFERENCES "user" (user_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_timetable FOREIGN KEY (timetable_id)
REFERENCES timetable (timetable_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_location FOREIGN KEY (location_id)
REFERENCES location (location_id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
Other tables are of lesser to no importance so they will not be shown (unless explicitly asked).
And for those tables, using the cube and earthdistance extensions, I've created the following function for finding all event_ids within a certain radius of a certain point.
CREATE OR REPLACE FUNCTION eventidswithinradius(
lat double precision,
lng double precision,
radius double precision)
RETURNS SETOF bigint AS
$BODY$
BEGIN
RETURN QUERY SELECT event.event_id
FROM event
INNER JOIN location ON location.location_id = event.location_id
WHERE earth_box( ll_to_earth(lat, lng), radius) #> ll_to_earth(location.latitude, location.longitude);
END;
$BODY$ LANGUAGE plpgsql;
And this works as expected. Now I wish to make it pageable, and am stuck on how to get all the necessary values (the table with paged contents and total count).
So far I've created this:
CREATE OR REPLACE FUNCTION pagedeventidswithinradius(
IN lat double precision,
IN lng double precision,
IN radius double precision,
IN page_size integer,
IN page_offset integer)
RETURNS TABLE( total_size integer , event_id bigint ) AS
$BODY$
DECLARE total integer;
BEGIN
SELECT COUNT(location.*) INTO total FROM location WHERE earth_box( ll_to_earth(lat, lng), radius) #> ll_to_earth(location.latitude, location.longitude);
RETURN QUERY SELECT total, event.event_id as event_id
FROM event
INNER JOIN location ON location.location_id = event.location_id
WHERE earth_box( ll_to_earth(lat, lng), radius) #> ll_to_earth(location.latitude, location.longitude)
ORDER BY event_id
LIMIT page_size OFFSET page_offset;
END;
$BODY$ LANGUAGE plpgsql;
Here count is called only once and stored in a variable since I assumed that if I placed COUNT into the return query itself it would be called for each row.
This kind of works, but it is difficult to parse on the back-end since the result is in the form (count, event_id), and the count is needlessly repeated across all result rows. I was hoping I could simply add total as an OUT parameter and have the function return the table while filling the OUT variable with the total count, but it seems this is not allowed. I could always make the count a separate function, but I was wondering if there is a better way to approach this issue?
No, there isn't really a better option. You want two different types of quantities so you need two queries. You can improve upon your function, however:
CREATE FUNCTION eventidswithinradius(lat float8, lng float8, radius float8) RETURNS SETOF bigint AS $BODY$
SELECT event.event_id
FROM event
JOIN location l USING (location_id)
WHERE earth_box(ll_to_earth(lat, lng), radius) #> ll_to_earth(l.latitude, l.longitude);
$BODY$ LANGUAGE sql STRICT;
As a LANGUAGE sql function it is more efficient than as a PL/pgSQL function, plus you can do your paging on the outside:
SELECT *
FROM eventidswithinradius(121.056, 14.582, 3000)
LIMIT 15 OFFSET 1;
Internally the query planner will resolve the function call to its underlying query and apply the paging directly to that level.
Get the total with the obvious:
SELECT count(*)
FROM eventidswithinradius(121.056, 14.582, 3000);
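If you prefer to encapsulate the count as well, here is a minimal sketch of a companion function (the name counteventswithinradius is just illustrative):
CREATE FUNCTION counteventswithinradius(lat float8, lng float8, radius float8)
RETURNS bigint AS $BODY$
    SELECT count(*)
    FROM eventidswithinradius(lat, lng, radius);
$BODY$ LANGUAGE sql STRICT;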

Is it possible to create a type that is an enum of some interval?

How do I make a new ENUM type that takes all of the members of an interval? Like:
CREATE TYPE letters [a...z]
There is no built-in syntax for this, but you can do it with dynamic SQL:
DO
$$
BEGIN
EXECUTE
(
SELECT 'CREATE TYPE a2z AS ENUM ('''
|| string_agg(chr(ascii('a') + g), ''',''')
|| ''')'
FROM generate_series(0,25) g
);
END
$$;
This builds and executes a statement of the form:
CREATE TYPE a2z AS ENUM ('a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z')
This depends on your locale, but I think all locales have a-z in a contiguous range, which is all that's needed here. Tested with a UTF-8 locale.
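A quick sanity check of the generated type:
SELECT 'q'::a2z;   -- ok
SELECT 'ab'::a2z;  -- ERROR: invalid input value for enum a2z: "ab"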
Alternative for anything that won't easily fit into an ENUM
For long lists of values, values that tend to change, values that are not as simple as in this example, special data types, etc., consider creating a small look-up table instead and pointing a foreign key constraint at it. An example for a selection of dates:
CREATE TABLE my_date (my_date date PRIMARY KEY);
INSERT INTO my_date(my_date) VALUES ('2015-02-03'), ...;
CREATE TABLE foo (
foo_id serial PRIMARY KEY
, my_date date REFERENCES my_date ON UPDATE CASCADE
--, more columns ...
);
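With this in place, foo accepts only dates that exist in my_date; anything else fails the foreign key check:
INSERT INTO foo (my_date) VALUES ('2015-02-03');  -- OK, present in my_date
INSERT INTO foo (my_date) VALUES ('1999-01-01');  -- ERROR: foreign key violation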