postgresql serial pk reverts to integer after restore

I built a database with the serial type for the primary keys. I migrated it to another server, and the PK columns are now plain integer; as a result I cannot add new data because of the NOT NULL constraint on the PK. Is there any ALTER command which can fix this?

SERIAL is not a data type in PostgreSQL, just a convenience word when creating tables: it makes the column an integer type and adds auto-incrementing. All you have to do is add the auto-incrementing (a sequence) back to the column and make sure its next value is greater than anything already in the table.
This question covers adding serial to an existing column
This answer explains how to reset the counter
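For example, a minimal sketch (the names test, col1 and test_col1_seq are placeholders, not taken from the question):
-- Recreate the sequence and tie its lifetime to the column
CREATE SEQUENCE test_col1_seq OWNED BY test.col1;
-- Make the column auto-increment again
ALTER TABLE test ALTER COLUMN col1 SET DEFAULT nextval('test_col1_seq');
-- Start the sequence just above the current maximum value
SELECT setval('test_col1_seq', COALESCE(MAX(col1), 0) + 1, false) FROM test;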

Related

How to efficiently add an auto-incrementing primary key to an existing column in postgres? [duplicate]

This question already has answers here:
PostgreSQL, reconfigure existing table, changing primary key to type=serial
(1 answer)
How to convert primary key from integer to serial?
(1 answer)
Closed 3 years ago.
Problem
I can add an auto-incrementing primary key to a pre-existing column in an empty table in postgres, but I wonder if it can be done more efficiently.
What I've Done
Before the table gets populated, I need to alter a column to add an auto-incrementing primary key. Similar to the answer to this question, the following will work (assuming the table is named test and the column in question is named col1):
ALTER TABLE test ADD PRIMARY KEY (col1);
CREATE SEQUENCE seq OWNED BY test.col1;
ALTER TABLE test ALTER COLUMN col1 SET DEFAULT nextval('seq');
UPDATE test SET col1 = nextval('seq');
Four lines is far from the end of the world. However, as per that answer, this can be done in one line if we're adding a column rather than altering a pre-existing one:
ALTER TABLE test ADD COLUMN col1 SERIAL PRIMARY KEY;
Question
Is there a way to do that in one line, but for a pre-existing column? It seems like SERIAL is limited to when one adds a new column, but I figured it can't hurt to ask. My naive attempts included things like:
ALTER TABLE test ALTER COLUMN col1 SERIAL PRIMARY KEY;
ALTER TABLE test ADD SERIAL PRIMARY KEY (col1);
Thanks!
EDIT: This got marked as a duplicate right off the bat, though I read both of those questions coming into this. I feel like they both use the same methodology that I'm already using (unless I misunderstood what was at play), and my question is about whether there's a more efficient way to do it, especially since there is one for new column creation.
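As an aside, on PostgreSQL 10 or later (the question does not state a version, so this is an assumption) an identity column can be attached to an existing column, which gets this down to two statements rather than four:
-- Add the PK first so the column is NOT NULL, then make it an identity column
ALTER TABLE test ADD PRIMARY KEY (col1);
ALTER TABLE test ALTER COLUMN col1 ADD GENERATED BY DEFAULT AS IDENTITY;
Still not a single statement, but it avoids creating and wiring up a sequence by hand.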

Does PostgreSQL create an internal key (probably an int type) as primary key for a table without a primary key specified?

From https://stackoverflow.com/a/40597571/3284469
If you don't specify a primary key, RDBMS will help you choose an unique and non-null key, OR create an internal key (probably an int type) as primary key for this table.
Could you give some examples for the "OR" case, where an RDBMS (PostgreSQL in particular, and possibly also MySQL or SQL Server) creates an "internal key (probably an int type) as primary key" for a table without a primary key specified?
Does PostgreSQL have something similar to MySQL?
Thanks.
for Postgres:
From "5.4. System Columns":
oid
The object identifier (object ID) of a row. This column is only present if the table was created using WITH OIDS, or if the default_with_oids configuration variable was set at the time. This column is of type oid (same name as the column); see Section 8.18 for more information about the type.
and
ctid
The physical location of the row version within its table. Note that although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. The OID, or even better a user-defined serial number, should be used to identify logical rows.
Both come close to what you're searching for but have restrictions as you can read in the documentation. So, as the manual states, using a user-defined PK is the better choice.
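These system columns never show up in SELECT *, but they can be selected explicitly; some_table below is just a placeholder:
SELECT ctid, * FROM some_table;
-- oid is only present on tables created WITH OIDS (removed in PostgreSQL 12)
SELECT oid, * FROM some_table;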
for SQL Server:
There is the undocumented pseudo column %%physloc%%. It describes the physical location of a row. That, however, might be subject to change if the row gets physically moved for whatever reason. And it's undocumented, that is, its behavior might change any time between releases or even just patches, or it might be removed completely without further notice. So using a user-defined PK is the better choice here as well.
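For completeness, the pseudo column is selected like any other expression (the table name is a placeholder):
SELECT %%physloc%% AS phys_loc, * FROM dbo.some_table;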

POSTGRESQL: autoincrement for varchar type field

I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept as used in MongoDB for uniquely identifying each row by its MongoId.
After migration, the already existing unique fields in our database are saved as a character type. I am looking for minimum source code changes.
So, is there any way in PostgreSQL to generate an auto-incrementing unique id for each row inserted into a table?
The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) e.g. '00000000' to them.
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
Although it would be best if you can do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs (while they are still varchars: this way the change will propagate to the referencing columns), drop the foreign keys, do the ALTER ... TYPE and then re-apply the foreign keys.
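A rough sketch of that drop / alter / re-add sequence (all table, column and constraint names are invented for illustration, and the padding is done in the USING clause instead of a separate UPDATE):
-- Drop the FK so both sides can be converted
ALTER TABLE child_table DROP CONSTRAINT child_table_parent_id_fkey;
-- Convert the referenced column and the referencing column
ALTER TABLE parent_table ALTER id_column TYPE uuid USING (id_column || '00000000')::uuid;
ALTER TABLE child_table ALTER parent_id TYPE uuid USING (parent_id || '00000000')::uuid;
-- Re-apply the FK
ALTER TABLE child_table ADD CONSTRAINT child_table_parent_id_fkey
  FOREIGN KEY (parent_id) REFERENCES parent_table (id_column);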
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();
Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The START WITH value should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
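A minimal sketch of that conversion, reusing the names from above and assuming every existing value is purely numeric text:
-- Drop the text default, change the type, then attach the sequence again
ALTER TABLE some_table ALTER id_column DROP DEFAULT;
ALTER TABLE some_table ALTER id_column TYPE integer USING id_column::integer;
ALTER TABLE some_table ALTER id_column SET DEFAULT nextval('some_id_sequence');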

oracle how to change the next autogenerated value of the identity column

I've created table projects like so:
CREATE TABLE projects (
project_id NUMBER(10,0) GENERATED BY DEFAULT ON NULL AS IDENTITY ,
project_name VARCHAR2(75 CHAR) NOT NULL
);
Then I inserted ~150,000 rows while importing data from my old MySQL table. The MySQL table had existing id numbers which I need to preserve, so I added the id number to the SQL during the insert. Now when I insert new rows into the Oracle table, the id is a very low number. Can you tell me how to reset the counter on the project_id column to start at 150,001 so as not to mess up any of my existing id numbers? Essentially I need the Oracle version of:
ALTER TABLE tbl AUTO_INCREMENT = 150001;
Edit: Oracle 12c now supports the identity data type, allowing an auto-incrementing primary key that does not require us to create a sequence + insert trigger.
SOLUTION:
After some creative Google search terms I was able to find this thread on the Oracle docs site. Here is the solution for changing the identity's nextval:
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
Here is the solution that I found on this Oracle thread. The concept is to alter your identity column rather than adjust the sequence. Actually, the sequences that are automatically created aren't editable or droppable.
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
According to this source, you can do it like this:
ALTER TABLE projects MODIFY project_id
GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH LIMIT VALUE);
The START WITH LIMIT VALUE clause can only be specified with an ALTER TABLE statement (and by implication against an existing identity column). When this clause is specified, the table will be scanned for the highest value in the PROJECT_ID column and the sequence will commence at this value + 1.
The same is also stated in the oracle thread referenced in OP's own answer:
START WITH LIMIT VALUE, which is specific to identity_options, can only be used with ALTER TABLE MODIFY. If you specify START WITH LIMIT VALUE, then Oracle Database locks the table and finds the maximum identity column value in the table (for increasing sequences) or the minimum identity column value (for decreasing sequences) and assigns the value as the sequence generator's high water mark. The next value returned by the sequence generator will be the high water mark + INCREMENT BY integer for increasing sequences, or the high water mark - INCREMENT BY integer for decreasing sequences.
The following statement creates the sequence customers_seq in the sample schema oe. This sequence could be used to provide customer ID numbers when rows are added to the customers table.
CREATE SEQUENCE customers_seq
START WITH 1000
INCREMENT BY 1
NOCACHE
NOCYCLE;
The first reference to customers_seq.nextval returns 1000. The second returns 1001. Each subsequent reference will return a value 1 greater than the previous reference.
http://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_6014.htm
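To illustrate how the sequence is then used (the customer_name column is an assumption, not part of the documentation excerpt):
INSERT INTO customers (customer_id, customer_name)
VALUES (customers_seq.NEXTVAL, 'Example Inc.');
-- the first insert gets customer_id 1000, the next one 1001, and so on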

get index from postgresql sequence using liquibase

Which attribute of column should I use in order to get an index value from a PostgreSQL sequence? valueNumeric? valueComputed?
As far as I understand, the value of the attribute should be nextval('simple_id_seq').
In PostgreSQL, sequence values are created as INTEGER or BIGINT.
Often this is done by using SERIAL or BIGSERIAL as the column type, which indirectly creates a sequence of int or bigint and sets the default value of the column to nextval(sequence).
In a result set of table data the column contains int or bigint.
Normally there is no need to use nextval(sequence) yourself: the column is filled automatically on INSERT (the column should simply not appear in the INSERT statement).
Refer to http://www.postgresql.org/docs/9.3/static/datatype-numeric.html
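A small throwaway example of that behaviour (table and column names are made up):
CREATE TABLE simple (
  id bigserial PRIMARY KEY,  -- creates the sequence simple_id_seq and sets it as the column default
  name text NOT NULL
);
-- the id column is omitted and filled automatically from the sequence
INSERT INTO simple (name) VALUES ('first row');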
If you do not want to use SERIAL or BIGSERIAL as suggested by #double_word_distruptor, use valueComputed.
With valueComputed you are telling Liquibase you are passing a function like nextval('simple_id_seq') and it will not try to parse it as a number or do any quoting.
You may also be able to use valueSequenceNext="simple_id_seq" to gain a little cross-database compatibility.
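For example, in an XML changelog a sketch could look like this (the changeSet wrapper, table and column names are all made up):
<changeSet id="1" author="example">
    <insert tableName="simple">
        <column name="id" valueComputed="nextval('simple_id_seq')"/>
        <column name="name" value="first row"/>
    </insert>
</changeSet>
<!-- or, a bit more portably: <column name="id" valueSequenceNext="simple_id_seq"/> -->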