Hi guys, I have a little problem. I have tables like these:
CREATE TABLE client(
regon VARCHAR NOT NULL,
title VARCHAR,
phone VARCHAR,
PRIMARY KEY(regon));
CREATE TABLE commodity(
id_com INT NOT NULL,
title VARCHAR,
PRIMARY KEY(id_com));
CREATE TABLE supply(
regon VARCHAR NOT NULL REFERENCES client(regon),
id_supply INT NOT NULL,
id_com INT NOT NULL REFERENCES commodity(id_com),
quantity INT,
price DEC(5,2),
PRIMARY KEY(regon, id_supply, id_com));
and I have to create a function which returns the value of all supplies (quantity*price),
and I made a function like this:
CREATE OR REPLACE FUNCTION value1(out id int, out war double precision)as $$
select (quantity*price) as value from supply;
$$
language 'plpgsql';
but it only shows the first supply, with the id of the first commodity and its value, not all commodities.
Maybe you know how to do this?
Thanks
Change this: select (quantity*price) as value from supply;
to this:
select sum(quantity*price) from supply
group by id_com
order by id_com
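If the function itself should return one row per commodity, one way is to make it a set-returning function instead of using OUT parameters. A minimal sketch based on the tables above, keeping the question's value1/war names (not part of the original answer):
CREATE OR REPLACE FUNCTION value1()
RETURNS TABLE(id int, war double precision) AS $$
    SELECT id_com, sum(quantity * price)::double precision
    FROM supply
    GROUP BY id_com
    ORDER BY id_com;
$$ LANGUAGE sql;
-- called like a table:
SELECT * FROM value1();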
I had to write the query like this:
select sum(quantity*price) from supply
group by id_com,quantity,price
order by id_com
but now it shows id_com twice, as if I had two commodities with id 1, one worth 400 and the second 30, and I think I should sum these up.
I am importing CSV data into PostgreSQL via pgAdmin 4, but there is a problem:
ERROR: invalid input syntax for type integer: ""
CONTEXT: COPY films, line 1, column gross: ""
I understand the error: in line 1, column gross there is a null value, and some other columns also contain null values. My question: how can I import a CSV file that contains null values? I've searched on Google but did not find a case similar to mine.
CREATE TABLE public.films
(
id int,
title varchar,
release_year float4,
country varchar,
duration float4,
language varchar,
certification varchar,
gross int,
budget int
);
I also tried the code below, but it failed:
CREATE TABLE public.films
(
id int,
title varchar,
release_year float4 null,
country varchar null,
duration float4 null,
language varchar null,
certification varchar null,
gross float4 null,
budget float4 null
);
The error message is in the image.
I've searched on Google and on the Stack Overflow forums. I hope that someone can help solve my problem.
There is no difference between the two table definitions. A column accepts NULL by default.
The issue is not a NULL value but an empty string:
select ''::integer;
ERROR: invalid input syntax for type integer: ""
LINE 1: select ''::integer;
select null::integer;
int4
------
NULL
Create a staging table that uses varchar for the fields that are now integer. Load the data into that table. Then convert the empty strings in the columns that will become integer, using something like:
update films_text set gross = nullif(trim(gross), '');
Then move the data to the production table.
This is not a pgAdmin4 issue; it is a data issue. Working in psql because it is easier to follow:
CREATE TABLE public.films_text
(
id varchar,
title varchar,
release_year varchar,
country varchar,
duration varchar,
language varchar,
certification varchar,
gross varchar,
budget varchar
);
\copy films_text from '~/Downloads/films.csv' with csv
COPY 4968
CREATE TABLE public.films
(
id int,
title varchar,
release_year float4,
country varchar,
duration float4,
language varchar,
certification varchar,
gross int,
budget int
);
-- Below done because of this value 12215500000 in budget column
alter table films alter COLUMN budget type int8;
INSERT INTO films
SELECT
    id::int,
    title,
    nullif(trim(release_year), '')::real,
    country,
    nullif(trim(duration), '')::real,
    language,
    certification,
    nullif(trim(gross), '')::float,
    nullif(trim(budget), '')::float
FROM
    films_text;
INSERT 0 4968
It worked for me:
https://learnsql.com/blog/how-to-import-csv-to-postgresql/
A small workaround, but it works:
I created the table.
I added headers to the CSV file.
Right-click on the newly created table -> Import/Export Data, select the CSV file to upload, go to the second tab, select Header, and it should work.
Below is my table structure for sold_quantity (Migration File)
alter table public.invoice_item add column sold_quantity int4 default 1;
Below is the function for execution
CREATE OR REPLACE FUNCTION sold_quantity()
RETURNS TABLE(
invoiceid BIGINT,
itemid BIGINT,
sum_sold_quantity INT)
AS $$
BEGIN
    RETURN QUERY SELECT
        invoice_id AS invoiceid,
        item_id AS itemid,
        sum(sold_quantity) AS sum_sold_quantity
    FROM
        invoice_item
    WHERE
        status = 'sold'
    GROUP BY
        invoice_id, item_id;
END; $$
LANGUAGE plpgsql;
What is wrong in my code? Please help me solve this error:
Returned type bigint does not match expected type integer in column 3
sum() returns a bigint, not necessarily the type of the column that is being summed.
If you are 100% sure your sum never exceeds the range for an integer, you can fix this using a cast in your query: sum(sold_quantity)::int as sum_sold_quantity
But it would be better to adjust the signature of the function:
CREATE OR REPLACE FUNCTION sold_quantity()
RETURNS TABLE(
invoiceid BIGINT,
itemid BIGINT,
sum_sold_quantity BIGINT)
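With that signature the complete function would read (the body is unchanged from the question):
CREATE OR REPLACE FUNCTION sold_quantity()
RETURNS TABLE(
    invoiceid BIGINT,
    itemid BIGINT,
    sum_sold_quantity BIGINT)
AS $$
BEGIN
    RETURN QUERY SELECT
        invoice_id AS invoiceid,
        item_id AS itemid,
        sum(sold_quantity) AS sum_sold_quantity
    FROM
        invoice_item
    WHERE
        status = 'sold'
    GROUP BY
        invoice_id, item_id;
END; $$
LANGUAGE plpgsql;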
I'm using PostgreSQL and I have two tables on which I want to execute two select queries. The data returned by each select is different!
The first table returns:
id integer, first_name varchar, last_name varchar, email varchar, company varchar, positions varchar, address varchar, phone varchar
and the second table returns:
group_contact_id integer, contact_id integer, group_id integer
I want to do it in a function like this:
create function findcontactbyid(id integer) returns table (id integer, first_name varchar, last_name varchar, email varchar, company varchar,positions varchar,address varchar,phone varchar, group_contact_id integer, contact_id integer, group_id integer) as $$
select * from cms_contact where id = $1
UNION ALL
select * from cms_groups_contacts where contact_id = $1
$$ language 'sql'
but I get errors:
$1 refers to (id integer) and it exists in both tables
I rather wonder about your function: what do you really need here?
Coming back to the issues:
I think it raises an error because PostgreSQL sees an ambiguity between id (the input parameter), id (the return column) and the id columns of the tables in the query. So, if you can, give them different names and then check again.
The second thing: I have some concerns about the UNION query. As far as I know, when you UNION two queries, both must return the same number of columns with the same column types.
So I think that even after you fix the first issue, the function still cannot run. But you can try it first.
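If what you need is each contact together with its group rows, a join matches the declared result columns better than a UNION. A minimal sketch, assuming cms_groups_contacts.contact_id references cms_contact.id, and with the parameter renamed to _id to avoid the ambiguity:
create or replace function findcontactbyid(_id integer)
returns table (id integer, first_name varchar, last_name varchar, email varchar,
               company varchar, positions varchar, address varchar, phone varchar,
               group_contact_id integer, contact_id integer, group_id integer) as $$
    select c.*, g.*
    from cms_contact c
    left join cms_groups_contacts g on g.contact_id = c.id
    where c.id = _id;
$$ language sql;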
I have a table created like
CREATE TABLE data
(value1 smallint references labels,
value2 smallint references labels,
value3 smallint references labels,
otherdata varchar(32)
);
and a second 'label holding' table created like
CREATE TABLE labels (id serial primary key, name varchar(32));
The rationale behind it is that value1-3 are a very limited set of strings (6 options) and it seems inefficient to enter them directly in the data table as varchar types. On the other hand these do occasionally change, which makes enum types unsuitable.
My question is, how can I execute a single query such that instead of the label IDs I get the relevant labels?
I looked at creating a function for it and stumbled at the point where I needed to pass the label holding table name to the function (there are several such (label holding) tables across the schema). Do I need to create a function per label table to avoid that?
create or replace function translate
(ref_id smallint,reference_table regclass) returns varchar(128) as
$$
begin
select name from reference_table where id = ref_id;
return name;
end;
$$
language plpgsql;
And then do
select
translate(value1, labels) as foo,
translate(value2, labels) as bar
from data;
This however errors out with
ERROR: relation "reference_table" does not exist
All suggestions welcome - at this point I can still alter just about anything...
CREATE TABLE labels
( id smallserial primary key
, name varchar(32) UNIQUE -- <<-- might want this, too
);
CREATE TABLE data
( value1 smallint NOT NULL REFERENCES labels(id) -- <<-- here
, value2 smallint NOT NULL REFERENCES labels(id)
, value3 smallint NOT NULL REFERENCES labels(id)
, otherdata varchar(32)
, PRIMARY KEY (value1,value2,value3) -- <<-- added primary key here
);
-- No need for a function here.
-- For small sizes of the `labels` table, the query below will always
-- result in hash-joins to perform the lookups.
SELECT l1.name AS name1, l2.name AS name2, l3.name AS name3
, d.otherdata AS the_data
FROM data d
JOIN labels l1 ON l1.id = d.value1
JOIN labels l2 ON l2.id = d.value2
JOIN labels l3 ON l3.id = d.value3
;
Note: labels.id -> labels.name is a functional dependency (id is the primary key), but that doesn't mean that you need a function. The query just acts like a function.
You can pass the label table name as a string, construct the query as a string and execute it:
sql := 'select name from ' || reference_table_name || ' where id = ' || ref_id;
EXECUTE sql INTO name;
RETURN name;
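A complete version of the original function along those lines, keeping the regclass parameter from the question and using format() with EXECUTE ... USING (a sketch, not tied to any particular label table):
create or replace function translate(ref_id smallint, reference_table regclass)
returns varchar(128) as
$$
declare
    result varchar(128);
begin
    -- the regclass value expands to a properly quoted table name;
    -- the id value is bound separately via USING
    execute format('select name from %s where id = $1', reference_table)
    into result
    using ref_id;
    return result;
end;
$$
language plpgsql;
-- usage, e.g.:
-- select translate(value1, 'labels') as foo, translate(value2, 'labels') as bar from data;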
Currently I am trying to build a history table based on PostgreSQL jsonb. As an example I have two tables:
CREATE TABLE data (id BIGSERIAL PRIMARY KEY, price NUMERIC(10,4) NOT NULL, article TEXT NOT NULL, quantity BIGINT NOT NULL, lose BIGINT NOT NULL, username TEXT NOT NULL);
CREATE TABLE data_history (id BIGSERIAL PRIMARY KEY, data JSONB NOT NULL, username TEXT NOT NULL);
The history table acts as a simple history (the username there could be avoided).
I populate the history with a trigger:
CREATE OR REPLACE FUNCTION insert_history() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO data_history (data, username) VALUES (row_to_json(NEW.*), NEW.username);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
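The trigger attaching this function to the data table would look something like this (the trigger name and timing are assumptions; the exact definition isn't shown here):
-- assumed trigger definition
CREATE TRIGGER data_history_trg
    AFTER INSERT OR UPDATE ON data
    FOR EACH ROW
    EXECUTE PROCEDURE insert_history();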
Now I try to populate the history back to the data table:
SELECT jsonb_populate_record(NULL::data, data) FROM data_history;
However the result will now be a tuple and not a table:
jsonb_populate_record
-------------------------------------
(1,45.4500,0A45477,100,1,c.schmitt)
(2,5.4500,0A45477,100,1,c.schmitt)
(2 rows)
Is there any way to get the data back in the shape of the data table? I know there is jsonb_populate_recordset, too, but it doesn't accept a query.
jsonb_populate_record() returns a row-type (or record-type), so if you use it in the SELECT clause, you'll get a single column, which is a row-type.
To avoid this, use it in the FROM clause instead (with an implicit LATERAL JOIN):
SELECT r.*
FROM data_history,
jsonb_populate_record(NULL::data, data) r
Technically, the statement below could work too
-- DO NOT use, just for illustration
SELECT jsonb_populate_record(NULL::data, data).*
FROM data_history
but it will call jsonb_populate_record() for each column in data (as a result of an engine limitation).