PostgreSQL: autoincrement for varchar type field

I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept as in MongoDB of uniquely identifying each row by a MongoId.
After the migration, the already existing unique fields in our database are stored as a character type. I am looking for minimal source code changes.
So, is there any way in PostgreSQL to generate an auto-incrementing unique ID for each row inserted into a table?

The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) for example '00000000' to them.
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
It would be best if you can do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs (while they are still varchars: this way the referenced columns will propagate the change), drop the foreign keys, do the alter type and then re-apply the foreign keys.
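For example, a rough sketch of that order of operations (all table, column and constraint names here are hypothetical; this assumes the foreign key is declared ON UPDATE CASCADE so the referencing column follows the first update):
-- pad the IDs while everything is still varchar; the cascade updates the referencing column too
update some_table set id_column = id_column || '00000000';
-- drop the foreign key, change both column types, then re-create the foreign key
alter table referencing_table drop constraint referencing_table_some_table_id_fkey;
alter table some_table alter id_column type uuid using id_column::uuid;
alter table referencing_table alter some_table_id type uuid using some_table_id::uuid;
alter table referencing_table
add constraint referencing_table_some_table_id_fkey
foreign key (some_table_id) references some_table (id_column);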
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();

Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The start with should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
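If the existing values are all plain digits, a minimal sketch of that conversion (hypothetical names, reusing the sequence from above; test on a copy first):
-- drop the text-flavoured default, convert the column, then re-add a plain bigint default
alter table some_table alter id_column drop default;
alter table some_table alter id_column type bigint using id_column::bigint;
alter table some_table alter id_column set default nextval('some_id_sequence');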

Remove "identity flag" from a column in PostgreSQL

I have some tables in PostgreSQL 12.9 that were declared as something like
-- This table is written in old style
create table old_style_table_1 (
id bigserial not null primary key,
...
);
-- This table uses new feature
create table new_style_table_2 (
id bigint generated by default as identity,
...
);
The second table seems to be declared using the identity column feature introduced in version 10.
Time went by, and we have partitioned the old tables, while keeping the original sequences:
CREATE TABLE partitioned_old_style_table_1 (LIKE old_style_table_1 INCLUDING DEFAULTS) PARTITION BY HASH (user_id);
CREATE TABLE partitioned_new_style_table_2 (LIKE new_style_table_2 INCLUDING DEFAULTS) PARTITION BY HASH (user_id);
DDL for their id columns seems to be id bigint default nextval('old_style_table_1_id_seq') not null and id bigint default nextval('new_style_table_2_id_seq') not null.
Everything has worked fine so far. Partitioned tables proved to be a great boon and we decided to retire the old tables by dropping them.
DROP TABLE old_style_table_1, new_style_table_2;
-- [2BP01] ERROR: cannot drop desired object(s) because other objects depend on them
-- Detail: default value for column id of table old_style_table_1 depends on sequence old_style_table_1_id_seq
-- default value for column id of table new_style_table_2 depends on sequence new_style_table_2_id_seq
After some pondering I've found out that sequences may have owners in postgres, so I opted to change them:
ALTER SEQUENCE old_style_table_1_id_seq OWNED BY partitioned_old_style_table_1.id;
DROP TABLE old_style_table_1;
-- Worked out flawlessly
ALTER SEQUENCE new_style_table_2_id_seq OWNED BY partitioned_new_style_table_2.id;
ALTER SEQUENCE new_style_table_2_id_seq OWNED BY NONE;
-- Here's the culprit of the question:
-- [0A000] ERROR: cannot change ownership of identity sequence
So, apparently the fact that this column has pg_attribute.attidentity set to 'd' forbids me from:
• changing the default value of the column:
ALTER TABLE new_style_table_2 ALTER COLUMN id SET DEFAULT 0;
-- [42601] ERROR: column "id" of relation "new_style_table_2" is an identity column
• dropping the default value:
ALTER TABLE new_style_table_2 ALTER COLUMN id DROP DEFAULT;
-- [42601] ERROR: column "id" of relation "new_style_table_2" is an identity column
-- Hint: Use ALTER TABLE ... ALTER COLUMN ... DROP IDENTITY instead.
• dropping the identity, column or the table altogether (new tables already depend on the sequence):
ALTER TABLE new_style_table_2 ALTER COLUMN id DROP IDENTITY IF EXISTS;
-- or
ALTER TABLE new_style_table_2 DROP COLUMN id;
-- or
DROP TABLE new_style_table_2;
-- result in
-- [2BP01] ERROR: cannot drop desired object(s) because other objects depend on them
-- default value for column id of table partitioned_new_style_table_2 depends on sequence new_style_table_2_id_seq
I've looked up the documentation; it provides a way to SET IDENTITY or ADD IDENTITY, but no way to remove the flag or to switch to a throwaway sequence without attempting to drop the existing one.
So, how can I remove the identity flag from the column-sequence pair so it won't affect other tables that use this sequence?
UPD: Tried running UPDATE pg_attribute SET attidentity='' WHERE attrelid=16816; on localhost, still receive [2BP01] and [0A000]. :/
I did manage to execute the DROP DEFAULT bit, but it seems like a dead end.
I don't think there is a safe and supported way to do that (without catalog modifications). Fortunately, there is nothing special about sequences that would make dropping them a problem. So take a short downtime and:
1. remove the default value that uses the identity sequence
2. record the current value of the sequence
3. drop the table
4. create a new sequence with an appropriate START value
5. use the new sequence to set new default values
If you want an identity column, you should define it on the partitioned table, not on one of the partitions.
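A sketch of those steps against the tables from the question (the START WITH value is a placeholder, replace it with the recorded value + 1):
-- 1. detach the partitioned table from the identity sequence
ALTER TABLE partitioned_new_style_table_2 ALTER COLUMN id DROP DEFAULT;
-- 2. record the current value of the sequence
SELECT last_value FROM new_style_table_2_id_seq;
-- 3. drop the old table; its identity sequence goes with it
DROP TABLE new_style_table_2;
-- 4. create a replacement sequence starting above the recorded value
CREATE SEQUENCE partitioned_new_style_table_2_id_seq
    START WITH 1000000
    OWNED BY partitioned_new_style_table_2.id;
-- 5. use the new sequence as the default
ALTER TABLE partitioned_new_style_table_2
    ALTER COLUMN id SET DEFAULT nextval('partitioned_new_style_table_2_id_seq');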

Alter a Column from INTEGER To BIGINT

In my database I have several fields with INTEGER Type. I need to change some of them to BIGINT.
So my question is, can I just use the following command?
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Will the contained data be converted the correct way? After the conversion, is this column a "real" BIGINT column?
I know this is not possible if there are constraints on this column (trigger, foreign key, ...). But if there are no constraints, is it possible to do it this way?
Or is it better to convert it via a helper column:
MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn
When you execute
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Firebird will not convert existing data from INTEGER to BIGINT; instead it will create a new format version for the table.
When inserting new rows or updating existing rows, the value will be stored as a BIGINT, but when reading Firebird will convert 'old' rows on the fly from INTEGER to BIGINT. This happens transparently for you as the user. This is to prevent needing to rewrite all existing rows, which could be costly (IO, garbage collection of old versions of rows, etc).
So please, do use ALTER TABLE .. ALTER COLUMN; do not do MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn. There are some exceptions to this rule, e.g. (potentially) lossy character set transformations are better done that way to prevent transliteration errors on select if a character does not exist in the new character set, or changing a (var)char column to be shorter (which can't be done with alter column).
To be a little more specific: when a row is written in the database it contains a format version (aka version count) of that row. The format version points to a description of a row (datatypes, etc) how Firebird should read that row. An alter table will create a new format version, and that format will be applied when writing new rows or updating existing rows. When reading an old row, Firebird will apply necessary transformation to present that row as the new format (for example adding new columns with their default values, transforming a data type of a column).
These format versions are also a reason why the number of alter tables is restricted: if you apply more than 255 alter tables on a single table, you must back up and restore the database (the format version is a single byte) before further changes are allowed to that table.
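If you want to keep an eye on that limit, the current format number of a table can be read from the system tables (a small sketch; assumes the documented Firebird system table layout):
-- RDB$FORMAT grows with each ALTER TABLE; a backup/restore cycle resets it
SELECT RDB$RELATION_NAME, RDB$FORMAT
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME = 'MYTABLE';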

how to update the data type of a column without deleting the values in Postgresql?

I made a mistake when creating my table. The primary key was incorrect. I deleted the constraint, and now I don't have a primary key in my table, only the field with the data. Now I want to set this field again as an auto-increment primary key without losing my data. How can I do this?
I tried this:
ALTER TABLE name_table ADD COLUMN name_column serial primary key;
But with this I am losing my data and creating a new column, which I don't want.
Try this:
ALTER TABLE table_name ADD CONSTRAINT some_name primary key (name_column);
My suggestion:
• Back up your database first in SQL, CSV, XML, Excel or some other restorable format.
• Then alter your table structure / column data type, from the UI or the command line.
• Then, if the data recorded in your table is lost or gone, restore only your backed-up data (not the structure of the table).
After that you will have changed the column data type and still have your required data. I hope it will work.
Hi guys, I was trying several ways and I found this one, which maybe somebody else can use later:
Create a sequence: a sequence is the way PostgreSQL implements auto-increment fields. Usually an auto-increment field is also a primary key. It doesn't have to be, it is not a rule, but in most cases an auto-increment field is a primary key.
Creating a sequence looks like this:
CREATE SEQUENCE exemplo_id_seq
INCREMENT 1   -- the value goes up by 1 on each call
MINVALUE 1
NO MAXVALUE
START 1       -- counting starts at 1
CACHE 1;
After this you only need to assign the sequence to the affected field as its default, using NEXTVAL, like this:
ALTER TABLE table_name ALTER COLUMN id SET DEFAULT NEXTVAL('exemplo_id_seq'::regclass);
This works well without losing the existing data from the old errors.
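One thing to watch out for: if the table already has rows, start the sequence above the current maximum so new inserts don't collide with existing ids (a small sketch, using the same hypothetical names; setval's third argument false means the next nextval returns exactly that value):
SELECT setval('exemplo_id_seq', COALESCE(MAX(id), 0) + 1, false) FROM table_name;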

oracle how to change the next autogenerated value of the identity column

I've created table projects like so:
CREATE TABLE projects (
project_id NUMBER(10,0) GENERATED BY DEFAULT ON NULL AS IDENTITY,
project_name VARCHAR2(75 CHAR) NOT NULL
);
Then I inserted ~150,000 rows while importing data from my old MySQL table. The MySQL table had existing id numbers which I need to preserve, so I added the id number to the SQL during the insert. Now when I insert new rows into the Oracle table, the id is a very low number. Can you tell me how to reset the counter on the project_id column to start at 150,001 so as not to mess up any of my existing id numbers? Essentially I need the Oracle version of:
ALTER TABLE tbl AUTO_INCREMENT = 150001;
Edit: Oracle 12c now supports the identity data type, allowing an auto number primary key that does not require us to create a sequence + insert trigger.
SOLUTION:
After some creative Google search terms I was able to find this thread on the Oracle docs site. Here is the solution for changing the identity's next value:
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
Here is the solution that I found on this Oracle thread. The concept is to alter your identity column rather than adjust the sequence. Actually, the sequences that are automatically created aren't editable or droppable.
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
According to this source, you can do it like this:
ALTER TABLE projects MODIFY project_id
GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH LIMIT VALUE);
The START WITH LIMIT VALUE clause can only be specified with an ALTER TABLE statement (and by implication against an existing identity column). When this clause is specified, the table will be scanned for the highest value in the PROJECT_ID column and the sequence will commence at this value + 1.
The same is also stated in the oracle thread referenced in OP's own answer:
START WITH LIMIT VALUE, which is specific to identity_options, can only be used with ALTER TABLE MODIFY. If you specify START WITH LIMIT VALUE, then Oracle Database locks the table and finds the maximum identity column value in the table (for increasing sequences) or the minimum identity column value (for decreasing sequences) and assigns the value as the sequence generator's high water mark. The next value returned by the sequence generator will be the high water mark + INCREMENT BY integer for increasing sequences, or the high water mark - INCREMENT BY integer for decreasing sequences.
The following statement creates the sequence customers_seq in the sample schema oe. This sequence could be used to provide customer ID numbers when rows are added to the customers table.
CREATE SEQUENCE customers_seq
START WITH 1000
INCREMENT BY 1
NOCACHE
NOCYCLE;
The first reference to customers_seq.nextval returns 1000. The second returns 1001. Each subsequent reference will return a value 1 greater than the previous reference.
http://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_6014.htm
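For completeness, a small usage sketch (the column list is hypothetical) of referencing such a sequence from an insert:
INSERT INTO customers (customer_id, cust_first_name, cust_last_name)
VALUES (customers_seq.NEXTVAL, 'John', 'Doe');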

get index from postgresql sequence using liquibase

Which column attribute should I use in order to get the index value from a PostgreSQL sequence? valueNumeric? valueComputed?
As far as I understand, the value of the attribute should be nextval('simple_id_seq').
In PostgreSQL, sequence values are created as INTEGER or BIGINT.
Often this is done by using SERIAL or BIGSERIAL as the column type ... which will indirectly create a sequence of int or bigint and set the default value of the column to nextval(sequence).
In a result set of table data the column contains int or bigint.
Normally there is no need to use nextval(sequence) ... it fills the column automatically on INSERT (the column should not appear in the INSERT statement).
Refer to http://www.postgresql.org/docs/9.3/static/datatype-numeric.html
If you do not want to use SERIAL or BIGSERIAL as suggested by #double_word_distruptor, use valueComputed.
With valueComputed you are telling Liquibase you are passing a function like nextval('simple_id_seq') and it will not try to parse it as a number or do any quoting.
You may also be able to use valueSequenceNext="simple_id_seq" to gain a little cross-database compatibility.
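For reference, an insert changeset using valueComputed effectively executes SQL along these lines (hypothetical table and columns):
-- roughly what reaches PostgreSQL when the id column uses valueComputed="nextval('simple_id_seq')"
INSERT INTO simple_table (id, name) VALUES (nextval('simple_id_seq'), 'example');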