Where to set Maximum Id of Primary Key in Oracle SQL Developer?
I have seeded some data from the production DB into a lower environment. In production the maximum ID is more than a million, so I want to set one million as the current maximum ID of the primary key in one of the lower-environment tables. Right now inserts are failing because that maximum ID already exists.
Nowhere, as far as I can tell. It is you who should pay attention to what you're inserting into the target environment, either
when "exporting" data from production, or
when "importing" data into lower environment (whatever that means)
You didn't explain how you did that. If you used a SELECT statement to create a CSV file, you could e.g. run
select ...
from production_Table
where id <= 1e6 -- restrict IDs to at most 1 million
or - while inserting data into the target - do
insert into lower_environment_table
select ... from ...
where id <= 1e6
It would be even easier if the databases are on the same network, so that you could create a database link and directly copy data from one environment to the other; a sketch follows below.
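A minimal sketch of that approach, run in the lower environment; the link name, credentials and connect descriptor here are placeholders:
CREATE DATABASE LINK prod_link
  CONNECT TO app_user IDENTIFIED BY app_password
  USING 'prod_db';

INSERT INTO lower_environment_table
SELECT *
FROM   production_table@prod_link
WHERE  id <= 1e6;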
If you used the Data Pump export and import utilities, you could apply the WHERE clause (via the QUERY parameter) so that the .dmp file already contains only the rows you want, which would then simplify the import.
On the other hand, if you don't care about possible duplicates (the primary key constraint prevents them anyway, as it won't allow them), the offending inserts would simply fail and you could ignore those errors.
Basically, the final "answer" depends on the way you're performing that process.
Related
Can I do row-specific update / delete operations in a DB2 table via SQL, in a NON UNIQUE Primary Key Context?
The Table is a PHYSICAL FILE on the NATIVE SYSTEM of the AS/400.
It was, like many other files, created without a unique key definition, which leads DB2 to the conclusion that the table, or PF, has no unique key.
And that's my problem. I can't alter the structure of the table to add a unique ID column, because I would have to recompile ALL the related programs on the AS/400, which is a serious issue; many things would probably stop working. Of course, I could do that refactoring for one table, but our system has thousands of those native FILES, some properly built with a unique key, some without any unique definition...
Well, most of the time I work with DB2 and SQL on those old files. All files that have a UNIQUE key pose no problem for those important update / delete operations.
Is there some way to get an additional column on every SELECT with a truly unique row ID, or row number? And, what is much more important, how can I use that row number in an UPDATE?
I did some research, and by now I assume there is no way to do exact updates or deletes when no unique key is present. What I would like is some additional ID column that is always returned with the table, which I can refer to in my update / delete operations. Perhaps my thinking here contains a fallacy, and tables without a unique key are meant to be edited in other ways.
Try the RRN function.
SELECT RRN(EMPLOYEE), LASTNAME
FROM EMPLOYEE
WHERE ...;
UPDATE EMPLOYEE
SET ...
WHERE RRN(EMPLOYEE) = ...;
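The same predicate works for deletes. Note that an RRN (relative record number) is only stable until the physical file is reorganized (e.g. with RGZPFM), so look it up and use it within the same operation rather than storing it long-term:
DELETE FROM EMPLOYEE
WHERE RRN(EMPLOYEE) = ...;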
From https://stackoverflow.com/a/40597571/3284469
If you don't specify a primary key, RDBMS will help you choose an unique and non-null key, OR create an internal key (probably an int type) as primary key for this table.
Could you give some examples for the "OR" case, where an RDBMS (PostgreSQL in particular, and possibly also MySQL or SQL Server) creates an "internal key (probably an int type) as primary key" for a table without a primary key specified?
Does PostgreSQL have something similar to MySQL?
Thanks.
for Postgres:
From "5.4. System Columns":
oid
The object identifier (object ID) of a row. This column is only present if the table was created using WITH OIDS, or if the default_with_oids configuration variable was set at the time. This column is of type oid (same name as the column); see Section 8.18 for more information about the type.
and
ctid
The physical location of the row version within its table. Note that although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. The OID, or even better a user-defined serial number, should be used to identify logical rows.
Both come close to what you're searching for but have restrictions as you can read in the documentation. So, as the manual states, using a user-defined PK is the better choice.
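A quick illustration of using ctid; the table and column names are made up:
SELECT ctid, *
FROM   some_table;

UPDATE some_table
SET    some_column = 'new value'
WHERE  ctid = '(0,1)';  -- a ctid value returned by the previous SELECT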
for SQL Server:
There is the undocumented pseudo column %%physloc%%. It describes the physical location of a row. That, however, might be subject to change if the row gets physically moved for whatever reason. And it's undocumented, that is, its behavior might change at any time between releases, or even just patches, or it might be removed completely without further notice. So using a user-defined PK is the better choice here as well.
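For illustration only; the table name is made up, and both the pseudo column and the sys.fn_PhysLocFormatter helper are undocumented, so treat this as a debugging aid rather than something to rely on:
SELECT %%physloc%% AS physloc,
       sys.fn_PhysLocFormatter(%%physloc%%) AS file_page_slot,
       *
FROM   dbo.SomeTable;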
I am inserting a huge number of records (about a million) containing a phone number and a status into a PostgreSQL database from Hibernate. I am reading the records from a file, processing each one, and inserting them one at a time. But before the insert I need to check whether this combination of phone number and status already exists in the table.
It seems to me that the fastest way would be to run a query with LIMIT 1, or an EXISTS query, but another suggestion I got from a colleague is to add a unique constraint on the phone number and status columns and, in case the unique key rule is violated, just catch the exception in Hibernate.
Any thoughts on what's the fastest and most reliable method?
It depends on whether there are only these two columns, or also some others, for example a date. If you don't care which record stays in the database (as opposed to, say, needing the latest combination of number, status and date), then create the unique constraint and recover from the exception that is thrown when inserting duplicates.
You may also insert everything, duplicates included, but with some primary key (an id), then delete all duplicates except the one you want to keep (using GROUP BY or a self-join), and only then create the unique constraint; a sketch follows below.
A last option depends on the size of the data set: if there are only 1M records, you may filter them in the application layer and then save them.
All of these depend on how many duplicates there are. If there are just a few, use option one; if each record may appear 10 times, the last option is probably the best (depending on RAM, since you only hold the currently best record per phone and status).
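A minimal sketch of the second option (deduplicate, then add the constraint), assuming the table is called phone_status and already has an id primary key:
-- keep the lowest id per (phone_number, status) pair, drop the rest
DELETE FROM phone_status a
USING  phone_status b
WHERE  a.id > b.id
  AND  a.phone_number = b.phone_number
  AND  a.status = b.status;

ALTER TABLE phone_status
  ADD CONSTRAINT uq_phone_status UNIQUE (phone_number, status);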
I need to migrate a DDL from Postgres to DB2, and I need it to work the same way as in Postgres. There is a table that generates its values from a sequence, but the values can also be given explicitly.
Postgres
create sequence hist_id_seq;
create table benchmarksql.history (
hist_id integer not null default nextval('hist_id_seq') primary key,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);
(Note the sequence call in the hist_id column's default, which defines the value of the primary key.)
The business logic sometimes inserts into the table by explicitly providing an ID, and in other cases it leaves it to the database to choose the number.
If I change this in DB2 to GENERATED ALWAYS it will throw errors, because some values are provided explicitly. On the other hand, if I create the table with GENERATED BY DEFAULT, DB2 will throw an error when trying to insert a value that already exists (SQL0803N), because the "internal sequence" does not take the already inserted values into account and does not retry with the next value.
And I do not want to restart the sequence each time a provided ID is inserted.
This is the problem in BenchmarkSQL when trying to port it to DB2: https://sourceforge.net/projects/benchmarksql/ (File sqlTableCreates)
How can I implement the same database logic in DB2 as it does in Postgres (and apparently in Oracle)?
You're operating under a misconception: that sources external to the db get to dictate its internal keys. Ideally, autogenerated ids never need to be seen outside the db, since conceptually there should be unique natural keys for export or reporting. Still, there are times when applications need to manage some ids, often when setting up related entities (e.g., JPA seems to want to work this way).
However, if you add an id value that you generated from a different source, the db won't be able to manage it. How could it? It wouldn't be efficient: attempting to do so would mean one of the following
Be unsafe in the face of multiple clients (attempt to add duplicate keys)
Serialize access to the table (for a potentially slow query, too)
(This usually shows up when people attempt something like: SELECT MAX(id) + 1, which would require locking the entire table for thread safety, likely including statements that don't even touch that column. If you try to find any "first-unused" id - trying to fill gaps - this gets more complicated and problematic)
Neither is ideal, so it's best not to have the problem in the first place. This is usually done by having id columns be autogenerated, but (as pointed out earlier) there are situations where we may need to know what the id will be before we insert the row into the table. Fortunately, there's a standard SQL object for this, SEQUENCE. This provides a db-managed, thread-safe, fast way to get ids. It appears that in PostgreSQL you can use sequences in the DEFAULT clause for a column, but DB2 doesn't allow it. If you don't want to specify an id every time (it should be autogenerated some of the time), you'll need another way; this is the perfect time to use a BEFORE INSERT trigger:
CREATE TRIGGER Add_Generated_Id
NO CASCADE BEFORE INSERT ON benchmarksql.history
REFERENCING NEW AS Incoming_Entity
FOR EACH ROW
WHEN (Incoming_Entity.hist_id IS NULL)
SET Incoming_Entity.hist_id = NEXT VALUE FOR hist_id_seq
(something like this - not tested. You didn't specify where in the project this would belong)
So, if you then add a row with something like:
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES(null, 'a')
or
INSERT INTO benchmarksql.history (h_data) VALUES('a')
an id will be generated and attached automatically. Note that ALL ids added to the table must come from the given sequence (as #mustaccio pointed out, this appears to be true even in PostgreSQL), or any UNIQUE CONSTRAINT on the column will start throwing duplicate-key errors. So any time your application needs an id before inserting a row in the table, you'll need some form of
SELECT NEXT VALUE FOR hist_id_seq
FROM sysibm.sysdummy1
... and that's it, pretty much. This is completely thread and concurrency safe, will not maintain/require long-term locks, nor require serialized access to the table.
I am trying to store a UUID value in my table using PostgreSQL 9.3.
Example:
create table test
(
uno UUID,
name text,
address text
);
insert into test values(1,'abc','xyz');
Note: how can I store an integer value in a UUID column?
The whole point of UUIDs is that they are automatically generated because the algorithm used virtually guarantees that they are unique in your table, your database, or even across databases. UUIDs are stored as 16-byte datums so you really only want to use them when one or more of the following holds true:
You need to store data indefinitely, forever, always.
When you have a highly distributed system of data generation (e.g. INSERTS in a single "system" which is distributed over multiple machines each having their own local database) where central ID generation is not feasible (e.g. a mobile data collection system with limited connectivity, later uploading to a central server).
Certain scenarios in load-balancing, replication, etc.
If one of these cases applies to you, then you are best off using the UUID as a primary key and have it generated automagically:
CREATE TABLE test (
uno uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
name text,
address text
);
This way you never have to worry about the UUIDs themselves; it is all done behind the scenes. Your INSERT statement would become:
INSERT INTO test (name, address) VALUES ('abc','xyz') RETURNING uno;
The RETURNING clause is obviously optional; it is useful if you want to use the generated value to reference related data.
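Note that uuid_generate_v4() comes from the uuid-ossp extension, which on 9.3 has to be enabled once per database (with sufficient privileges) before the DEFAULT above will work:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";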
It's not allowed to simply cast an integer into a UUID type. Generally, UUIDs are generated either internally in Postgres (see http://www.postgresql.org/docs/9.3/static/uuid-ossp.html for more detail on ways to do this) or via a client program.
If you just want unique IDs for your table itself, but don't actually need UUIDs (those are geared toward universal uniqueness, across tables and servers, etc.), you can use the serial type, which creates an implicit sequence for you which automatically increments when you INSERT into a table.
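For example, a minimal sketch of that alternative, mirroring the table from the question:
create table test
(
  uno serial primary key,  -- backed by an implicit sequence, assigned automatically
  name text,
  address text
);

insert into test (name, address) values ('abc', 'xyz');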