I have to change the value of a particular column in PostgreSQL, but I have never worked with bitmasks before (it's legacy code). The current value is 1, but it's stored as an integer bitmask, so when I try
update table set field=0 where ...
it doesn't work and keeps returning 1. So I read the documentation and tried an AND:
update table set field=field&0 where ...
But that didn't work either. Then I tried covering all 30 bits, also without success:
update table set field=field&0000000000000000000000000000000 where ...
Can someone please show me how to properly change the value of an integer bitmask in PostgreSQL?
EDIT: I found this case here on Stack Overflow:
UPDATE users SET permission = permission & ~16
which seems to be the closest to my case, so I tried
UPDATE table SET field = field & ~1
because that's the only bit I have to clear, but it is still active and still returns 1 when I do a SELECT.
PostgreSQL may be pickier than you're used to when it comes to combining numbers of different types. You can't just use a 0; you need to use a bit string constant of the right length, or explicitly cast an integer to the proper bit(n) type.
Bitstring constants are entered like B'00101', which would have type bit(5).
Some examples you can type into psql:
select 5;
select (5::bit(5));
select (30::bit(5));
select ~(1::bit(5));
select ~B'00001';
select (B'00101' & B'11110');
select (B'00101' & ~(1::bit(5)));
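Putting that together for the question: if the column really is declared as a bit type (assuming bit(30) here, with a hypothetical table name standing in for yours), clearing the lowest bit would look something like:
-- assuming the column is declared bit(30); "my_table" stands in for your table
UPDATE my_table SET field = field & ~(1::bit(30)) WHERE ...;
If the column is instead a plain integer used as a bitmask, the integer form from your edit (field & ~1) is the right approach.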
I need to set a limit on the values of a column. For example, I have a column TEST_COLUMN with the INTEGER data type. I need to set a condition so that the column only accepts values between
1 and 4 (it should not be more than 4 or less than 1). Is this possible in PostgreSQL?
Thanks in advance.
I could handle it at the code level, but is there a way to do it at the database level?
You almost certainly don't mean "postgresql 9.4" in your tags - that version is VERY old.
What you are after is a "CHECK constraint".
https://www.postgresql.org/docs/15/ddl-constraints.html
You use it something like this:
CREATE TABLE mytable (
    ...
    my_column int NOT NULL CHECK (my_column BETWEEN 1 AND 4)
    ...
);
For PostgreSQL at least, the check shouldn't rely on running a query, just accessing columns from the row you are inserting or updating.
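If the table already exists, a minimal sketch (reusing the hypothetical mytable/my_column names above) adds the same rule with ALTER TABLE:
-- constraint name is arbitrary; my_column_range_check is just an example
ALTER TABLE mytable
    ADD CONSTRAINT my_column_range_check CHECK (my_column BETWEEN 1 AND 4);
Existing rows are validated when the constraint is added, so this fails if any current value falls outside 1-4.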
I want to add a sequence to a column that might already have data, so I'm trying to start it beyond whatever's already there. Assuming there already is data, I would like to have done it this way:
CREATE SEQUENCE my_sequence MINVALUE 1000000 START
(SELECT MAX(id_column) FROM my_table) OWNED BY my_table.id_column;
but it keeps dying at the ( with a syntax error. It seems the start value has to be a hard-coded number, nothing symbolic.
Of course, an even better solution would be if the sequence could be intelligent enough to avoid duplicate values, since id_column has a unique constraint on it--that's why I'm doing this. But from what I can tell, that's not possible.
I also tried skipping the START and then doing:
ALTER SEQUENCE my_sequence RESTART WITH (SELECT max(id_column)+1 FROM my_table);
but, again, it doesn't seem to like symbolic start values.
I'm running PostgreSQL 9.4 but some of our customers are using stuff as primitive as 8.3.
You can't specify a dynamic value for the start value.
But you can set the value once the sequence is created:
CREATE SEQUENCE my_sequence MINVALUE 1000000 OWNED BY my_table.id_column;
select setval('my_sequence', (SELECT MAX(id_column) FROM my_table));
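If the column isn't already attached to the sequence as its default, a small follow-up (using the same my_table / id_column / my_sequence names) would be:
-- make new rows draw their id from the sequence
ALTER TABLE my_table ALTER COLUMN id_column SET DEFAULT nextval('my_sequence');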
You can reset your sequence on demand:
select setval('my_sequence', (SELECT MAX(id_column) FROM my_table));
Applicable to Postgres 9.2.
Because I was struggling with a slight variation on the accepted answer's use case, and was getting an error telling me that setval did not exist, I thought I'd share in case others hit the same thing.
I needed to set the value of my sequence to the max value in an id column, but I also wanted to combine that with a default starting value if there were no rows.
To do this I used coalesce combined with max like this:
select setval('sequence', cast((select coalesce(max(id),1) from table) as bigint));
The catch here was using cast with the select; without that, you get an error something like:
ERROR: function setval(unknown, double precision) does not exist
LINE 1: select setval('sequence', (select coalesce(MAX(i...
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
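The same statement can also be written with PostgreSQL's :: cast syntax; a sketch, keeping the same hypothetical sequence and table names:
-- equivalent to CAST(... AS bigint)
select setval('my_sequence', (select coalesce(max(id), 1) from my_table)::bigint);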
Does PostgreSQL support computed / calculated columns, like MS SQL Server? I can't find anything in the docs, but as this feature is included in many other DBMSs I thought I might be missing something.
E.g.: http://msdn.microsoft.com/en-us/library/ms191250.aspx
Postgres 12 or newer
STORED generated columns were introduced with Postgres 12 - as defined in the SQL standard and implemented by some RDBMSs, including DB2, MySQL, and Oracle. SQL Server's "computed columns" are similar.
Trivial example:
CREATE TABLE tbl (
    int1 int
  , int2 int
  , product bigint GENERATED ALWAYS AS (int1 * int2) STORED
);
fiddle
VIRTUAL generated columns may come with one of the next iterations. (Not in Postgres 15, yet).
Related:
Attribute notation for function call gives error
Postgres 11 or older
Up to Postgres 11 "generated columns" are not supported.
You can emulate VIRTUAL generated columns with a function using attribute notation (tbl.col) that looks and works much like a virtual generated column. That's a bit of a syntax oddity which exists in Postgres for historic reasons and happens to fit the case. This related answer has code examples:
Store common query as column?
The expression (looking like a column) is not included in a SELECT * FROM tbl, though. You always have to list it explicitly.
Can also be supported with a matching expression index - provided the function is IMMUTABLE. Like:
CREATE FUNCTION col(tbl) ... AS ... -- your computed expression here
CREATE INDEX ON tbl(col(tbl));
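As a rough illustration of that emulation (my own sketch, not the linked answer's code), assuming a pre-12 table tbl(int1 int, int2 int) without the generated column:
-- hypothetical emulation: an IMMUTABLE function over the row type
CREATE FUNCTION product(tbl) RETURNS bigint
    LANGUAGE sql IMMUTABLE AS
$$ SELECT ($1.int1 * $1.int2)::bigint $$;

-- attribute notation: t.product looks like a column but is really product(t)
SELECT t.int1, t.int2, t.product FROM tbl t;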
Alternatives
Alternatively, you can implement similar functionality with a VIEW, optionally coupled with expression indexes. Then SELECT * can include the generated column.
"Persisted" (STORED) computed columns can be implemented with triggers in a functionally equivalent way.
Materialized views are a related concept, implemented since Postgres 9.3.
In earlier versions one can manage MVs manually.
YES you can!! The solution should be easy, safe, and performant...
I'm new to PostgreSQL, but it seems you can create computed columns by using an expression index, paired with a view (the view is optional, but makes life a bit easier).
Suppose my computation is md5(some_string_field), then I create the index as:
CREATE INDEX some_string_field_md5_index ON some_table(MD5(some_string_field));
Now, any queries that act on MD5(some_string_field) will use the index rather than computing it from scratch. For example:
SELECT MAX(some_field) FROM some_table GROUP BY MD5(some_string_field);
You can check this with explain.
However at this point you are relying on users of the table knowing exactly how to construct the column. To make life easier, you can create a VIEW onto an augmented version of the original table, adding in the computed value as a new column:
CREATE VIEW some_table_augmented AS
SELECT *, MD5(some_string_field) as some_string_field_md5 from some_table;
Now any queries using some_table_augmented will be able to use some_string_field_md5 without worrying about how it works; they just get good performance. The view doesn't copy any data from the original table, so it is good memory-wise as well as performance-wise. Note, however, that you can't update/insert into a view, only into the source table, but if you really want, I believe you can redirect inserts and updates to the source table using rules (I could be wrong on that last point, as I've never tried it myself).
Edit: it seems that if the query involves competing indexes, the planner may sometimes not use the expression index at all. The choice seems to be data dependent.
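To see whether the expression index is actually picked up, you can run EXPLAIN on a query that filters on the same expression; a sketch with a hypothetical filter value:
EXPLAIN
SELECT * FROM some_table
WHERE MD5(some_string_field) = MD5('some value');
If the plan shows an index scan on some_string_field_md5_index rather than a sequential scan, the index is being used.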
One way to do this is with a trigger!
CREATE TABLE computed(
    one SERIAL,
    two INT NOT NULL
);

CREATE OR REPLACE FUNCTION computed_two_trg()
    RETURNS trigger
    LANGUAGE plpgsql
    SECURITY DEFINER
AS $BODY$
BEGIN
    NEW.two = NEW.one * 2;
    RETURN NEW;
END
$BODY$;

CREATE TRIGGER computed_500
    BEFORE INSERT OR UPDATE
    ON computed
    FOR EACH ROW
    EXECUTE PROCEDURE computed_two_trg();
The trigger is fired before the row is inserted or updated. It sets the field we want to compute on the NEW record and then returns that record.
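A quick usage check of the trigger above (my own illustration): two is filled in automatically, because the BEFORE trigger runs ahead of the NOT NULL check.
-- "two" is NULL on the way in; the BEFORE trigger fills it before constraints are checked
INSERT INTO computed DEFAULT VALUES;
SELECT one, two FROM computed;  -- two = one * 2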
PostgreSQL 12 supports generated columns:
PostgreSQL 12 Beta 1 Released!
Generated Columns
PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.
Generated Columns
A generated column is a special column that is always computed from other columns. Thus, it is for columns what a view is for tables.
CREATE TABLE people (
    ...,
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
db<>fiddle demo
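For a quick sense of how it behaves (my own illustration, assuming the elided columns above allow defaults): you only ever write height_cm; height_in is computed for you and cannot be assigned directly.
INSERT INTO people (height_cm) VALUES (180);
SELECT height_cm, height_in FROM people;  -- 180, ~70.87
-- writing height_in directly (e.g. INSERT ... (height_in) VALUES (70)) raises an error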
Well, I'm not sure if this is what you mean, but Postgres normally supports this kind of "dummy" ETL syntax.
I created an empty column in a table and then needed to fill it with calculated values depending on other values in the row.
UPDATE table01
SET column03 = column01*column02; /*e.g. for multiplication of 2 values*/
It is so simple that I suspect it is not what you are looking for.
Obviously it is not dynamic; you only run it once. But there is nothing stopping you from putting it into a trigger.
Example of creating an empty virtual column:
,(SELECT *
From (values (''))
A("virtual_col"))
Example of creating two virtual columns with values:
SELECT *
From (values (45,'Completed')
, (1,'In Progress')
, (1,'Waiting')
, (1,'Loading')
) A("Count","Status")
order by "Count" desc
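For context, here's my guess at how the first fragment would slot into a full query (hypothetical some_table; note that in Postgres the subquery in FROM needs its own alias):
SELECT *
FROM some_table
   , (SELECT * FROM (VALUES ('')) A("virtual_col")) AS v;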
I have some code that works and uses the term calculated. I'm not on pure PostgreSQL, though; we run on PADB.
Here is how it's used:
create table some_table as
select category,
txn_type,
indiv_id,
accum_trip_flag,
max(first_true_origin) as true_origin,
max(first_true_dest ) as true_destination,
max(id) as id,
count(id) as tkts_cnt,
(case when calculated tkts_cnt=1 then 1 else 0 end) as one_way
from some_rando_table
group by 1,2,3,4 ;
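Plain PostgreSQL has no calculated keyword, so an output-column alias like tkts_cnt can't be referenced in the same SELECT list; my rough equivalent (assuming the same hypothetical tables) simply repeats the aggregate:
create table some_table as
select category,
       txn_type,
       indiv_id,
       accum_trip_flag,
       max(first_true_origin) as true_origin,
       max(first_true_dest)   as true_destination,
       max(id)                as id,
       count(id)              as tkts_cnt,
       (case when count(id) = 1 then 1 else 0 end) as one_way
from some_rando_table
group by 1, 2, 3, 4;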
A lightweight solution with a CHECK constraint:
CREATE TABLE example (
discriminator INTEGER DEFAULT 0 NOT NULL CHECK (discriminator = 0)
);
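With this in place, any attempt to store a different value is rejected; a quick illustration (my own):
INSERT INTO example DEFAULT VALUES;             -- OK: discriminator defaults to 0
INSERT INTO example (discriminator) VALUES (1); -- rejected by the CHECK constraint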
I have a simple question, suppose we have a table:
id | A   | B
---+-----+-----
1  | Jon | Doe
2  | Foo | Bar
Is there a way to know what the next id increment will be, in this case 3?
The database is PostgreSQL!
Thanks a lot!
If you want to claim an ID and return it, you can use nextval(), which advances the sequence without inserting any data.
Note that if this is a SERIAL column, you need to find the sequence's name based on the table and column name, as follows:
Select nextval(pg_get_serial_sequence('my_table', 'id')) as new_id;
There is no cast-iron guarantee that you'll see these IDs come back in order (the sequence generates them in order, but multiple sessions can claim an ID and not use it yet, or roll back an INSERT and the ID will not be reused) but there is a guarantee that they will be unique, which is normally the important thing.
If you do this often without actually using the ID, you will eventually use up all the possible values of a 32-bit integer column (i.e. reach the maximum representable integer), but if you use it only when there's a high chance you will actually be inserting a row with that ID it should be OK.
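A sketch of claiming an id up front and then using it explicitly (assuming the question's table is called mytable, with columns id, a, b):
-- claim an id without inserting (e.g. this returns 3) ...
SELECT nextval(pg_get_serial_sequence('mytable', 'id')) AS new_id;
-- ... then supply the claimed value explicitly in a later INSERT
INSERT INTO mytable (id, a, b) VALUES (3, 'New', 'Row');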
To get the current value of a sequence without affecting it or needing a previous insert in the same session, you can use:
SELECT last_value FROM tablename_fieldname_seq;
An SQLfiddle to test with.
Of course, getting the current value will not guarantee that the next value you'll get is actually last_value + 1 if there are other simultaneous sessions doing inserts, since another session may have taken the serial value before you.
SELECT currval('names_id_seq') + 1;
See the docs
However, of course, there's no guarantee that it's going to be your next value. What if another client grabs it before you? You can, though, reserve one of the next values for yourself by selecting nextval from the sequence. (Note that currval only works after nextval has been called for that sequence in the current session; otherwise it raises an error.)
I'm new, so here's the process I use, having little to no prior knowledge of how Postgres/SQL works:
1. Find the sequence for your table using pg_get_serial_sequence():
SELECT pg_get_serial_sequence('person','id');
This should output something like public.person_id_seq; person_id_seq is the sequence for your table.
2. Plug the sequence from (1) into nextval():
SELECT nextval('person_id_seq');
This will output an integer value. Note that nextval() consumes that value, so if you don't use it yourself, the next row inserted with the default will get the value after it.
You can turn this into a single command, as mentioned in the accepted answer above:
SELECT nextval(pg_get_serial_sequence('person','id'));
If you notice that the sequence is returning unexpected values, you can set the current value of the sequence using setval():
SELECT setval(pg_get_serial_sequence('person','id'),1000);
In this example, the next call to nextval() will return 1001.