Identity column pair - T-SQL

In SQL Server 2008 R2, is it possible to declare a (grouping, identity) pair where the identity is individually counted for each grouping value?
For example, in a table called Invoice I have these columns:
(year INT, invoiceNo IDENTITY (group by year) )
This way I could automatically enforce a unique constraint on (year, invoiceNo) per accounting year, which is required by law.
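SQL Server (2008 R2 included) has no built-in per-group identity, so this cannot be declared directly. A common workaround is to compute MAX + 1 inside a transaction while holding a key-range lock; a minimal sketch (the procedure name is illustrative, not from the question):
CREATE TABLE Invoice (
    [year]    int NOT NULL,
    invoiceNo int NOT NULL,
    CONSTRAINT PK_Invoice PRIMARY KEY ([year], invoiceNo)
);
GO
CREATE PROCEDURE dbo.InsertInvoice @year int
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    -- UPDLOCK + HOLDLOCK serialize concurrent inserts within the same year
    DECLARE @next int;
    SELECT @next = COALESCE(MAX(invoiceNo), 0) + 1
    FROM Invoice WITH (UPDLOCK, HOLDLOCK)
    WHERE [year] = @year;
    INSERT INTO Invoice ([year], invoiceNo) VALUES (@year, @next);
    COMMIT;
END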

Related

Which Datatype should we use to store rupees and paisa?

I am building an e-commerce platform and, when creating an invoice and saving it to the database, the amounts are not accurate; there is a difference of 3-4 rupees in the final amount.
Which datatype should we use to store rupees and paisa to avoid this inaccuracy?
It is generally recommended to use the DECIMAL datatype for monetary values. In PostgreSQL, NUMERIC is equivalent to DECIMAL, so it can be used.
Consider the example below:
CREATE TABLE IF NOT EXISTS products (
id serial PRIMARY KEY,
name VARCHAR NOT NULL,
price NUMERIC(5, 2) -- precision 5, scale 2: up to 999.99, two decimal places
);
INSERT INTO products (name, price) VALUES
('Phone', 100.21),
('Tablet', 300.49);
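For contrast, a quick illustration (not from the original answer) of why binary floating point drifts while NUMERIC does not:
SELECT 0.1::float8 + 0.2::float8;  -- 0.30000000000000004, not exactly 0.3 (display depends on extra_float_digits)
SELECT 0.1::numeric + 0.2::numeric; -- exactly 0.3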

Can a foreign key be created to point into a non-unique column in a date-range versioned table in PostgreSQL?

I would like to do two things in PostgreSQL:
version rows in a table by a date range
ensure the integrity of the table by setting up single column foreign keys
It seems I can only do one of the two at a time.
Detailed example:
I need to version the contents of a table by date range, so that at any point in time there is only one valid row: the (customId, validFrom, validUntil) unique index admits no overlapping ranges, but it's important that none of those columns is unique by itself.
With this method I can query my table and get the entity valid at any point in time, but I could not figure out how to link this table to another table via the customId key so that the integrity of the table is guarded.
The problem is that customId is not unique: the same key occurs once for each range that has been recorded.
One solution I have used before, when only the latest state of the entity matters, is to create a separate x_history table and copy the old state there on every change. That wouldn't work well here, because I would constantly have to query both tables: which version of the data a SELECT needs is effectively "random".
Example by data:
table a:
id (PK)
custom_id (unique at any single point in time via the composite unique index above)
valid_from (timestamp, storing the start of the validity of a)
valid_until (timestamp, storing the end of the validity of a)
table b:
id (PK)
a__custom_id (unique at any single point in time)
valid_from (timestamp, storing the start of the validity of b)
valid_until (timestamp, storing the end of the validity of b)
I would like to insert only those rows into table b for which:
b.a__custom_id exists in a.custom_id
(b.a__custom_id, b.valid_from, b.valid_until) is unique
You cannot easily have both foreign keys and historical data.
One way would be to have the validity range as part of the primary key, but then you have to update many rows whenever you modify an entry in the referenced table.
I think you can get away with a history table if you include the currently active version in the history table. Then you can just query the history table, and the table with the current values is just there for foreign keys.
The history table would have an exclusion constraint over the primary key and the time range.
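A minimal sketch of such a history table, assuming PostgreSQL 9.2+ for range types and the btree_gist extension (table and column names are illustrative):
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE a_history (
    custom_id int NOT NULL,
    validity  tsrange NOT NULL,
    -- at most one version of each custom_id at any point in time
    EXCLUDE USING gist (custom_id WITH =, validity WITH &&)
);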

Oracle: how to change the next autogenerated value of an identity column

I've created a table called projects like so:
CREATE TABLE projects (
project_id NUMBER(10,0) GENERATED BY DEFAULT ON NULL AS IDENTITY,
project_name VARCHAR2(75 CHAR) NOT NULL
);
Then I inserted ~150,000 rows while importing data from my old MySQL table. The MySQL table had existing ID numbers which I need to preserve, so I supplied the IDs explicitly during the inserts. Now when I insert new rows into the Oracle table, the generated ID is a very low number. Can you tell me how to reset the counter on the project_id column to start at 150,001, so as not to collide with any of my existing ID numbers? Essentially I need the Oracle version of:
ALTER TABLE tbl AUTO_INCREMENT = 150001;
Edit: Oracle 12c now supports the identity data type, allowing an auto-numbered primary key without creating a sequence plus an insert trigger.
SOLUTION:
After some creative Google search terms I was able to find this thread on the Oracle docs site. Here is the solution for changing the identity's next value:
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
Here is the solution I found on this Oracle thread. The concept is to alter the identity column rather than adjusting the sequence; the sequences that are created automatically for identity columns cannot be altered or dropped directly.
ALTER TABLE projects MODIFY project_id GENERATED BY DEFAULT ON NULL AS IDENTITY ( START WITH 150000);
According to this source, you can do it like this:
ALTER TABLE projects MODIFY project_id
GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH LIMIT VALUE);
The START WITH LIMIT VALUE clause can only be specified with an ALTER TABLE statement (and by implication against an existing identity column). When this clause is specified, the table will be scanned for the highest value in the PROJECT_ID column and the sequence will commence at this value + 1.
The same is also stated in the oracle thread referenced in OP's own answer:
START WITH LIMIT VALUE, which is specific to identity_options, can only be used with ALTER TABLE MODIFY. If you specify START WITH LIMIT VALUE, then Oracle Database locks the table and finds the maximum identity column value in the table (for increasing sequences) or the minimum identity column value (for decreasing sequences) and assigns the value as the sequence generator's high water mark. The next value returned by the sequence generator will be the high water mark + INCREMENT BY integer for increasing sequences, or the high water mark - INCREMENT BY integer for decreasing sequences.
The following statement creates the sequence customers_seq in the sample schema oe. This sequence could be used to provide customer ID numbers when rows are added to the customers table.
CREATE SEQUENCE customers_seq
START WITH 1000
INCREMENT BY 1
NOCACHE
NOCYCLE;
The first reference to customers_seq.nextval returns 1000. The second returns 1001. Each subsequent reference will return a value 1 greater than the previous reference.
http://docs.oracle.com/cd/B12037_01/server.101/b10759/statements_6014.htm

PostgreSQL table design

I need to create a table (PostgreSQL 9.1) and I am stuck. Could you possibly help?
The incoming data can assume either of the two formats:
client id(int), shop id(int), asof(date), quantity
client id(int), shop type(int), shop genre(varchar), asof(date), quantity
The given incoming CSV template is: {client id, shop id, shop type, shop genre, asof, quantity}
In the first case, the key is -- client id, shop id, asof
In the second case, the key is -- client id, shop type, shop genre, asof
I tried something like:
create table client(
client_id int references...,
shop_id int references...,
shop_type int references...,
shop_genre varchar(30),
asof date,
quantity real,
primary key( client_id, shop_id, shop_type, shop_genre, asof )
);
But then I ran into a problem: when the data is in format 1, the inserts fail because of NULLs in the primary key.
The queries within a client can be either by shop id, or by a combination of shop type and genre. There are no use cases of partial or regex matches on genre.
What would be a suitable design? Must I split this into two tables and then take a union of the search results? Or is it customary to put 0s and blanks for missing values and move on?
If it matters, the table is expected to be 100-500 million rows once all historic data is loaded.
Thanks.
You could try partial unique indexes, also known as filtered or conditional unique indexes.
http://www.postgresql.org/docs/9.2/static/indexes-partial.html
Basically, uniqueness is enforced only for the rows that match the index's WHERE clause.
For example (of course, test for correctness and impact on performance):
CREATE TABLE client(
pk_id SERIAL,
client_id int,
shop_id int,
shop_type int,
shop_genre varchar(30),
asof date,
quantity real,
PRIMARY KEY (pk_id)
);
CREATE UNIQUE INDEX uidx1_client
ON client
USING btree
(client_id, shop_id, asof, quantity)
WHERE client_id = 200;
CREATE UNIQUE INDEX uidx2_client
ON client
USING btree
(client_id, asof, quantity)
WHERE client_id = 500;
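Applied to the two incoming formats in the question, the filter could hinge on whether shop_id is present; a minimal sketch, assuming format 2 always leaves shop_id NULL (index names are illustrative):
CREATE UNIQUE INDEX uidx_client_by_shop
ON client (client_id, shop_id, asof)
WHERE shop_id IS NOT NULL;

CREATE UNIQUE INDEX uidx_client_by_type_genre
ON client (client_id, shop_type, shop_genre, asof)
WHERE shop_id IS NULL;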
A simple solution would be to create a field for the primary key whose value is generated by one of two algorithms, depending on which format is passed in.
If you wanted a fully normalised solution, you would probably need to split the shop information into two separate tables and reference them from this table using outer joins; see the sketch below.
You may also be able to use the table inheritance available in Postgres.
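A rough sketch of that normalised split (all table and column names here are illustrative, not from the original answer):
CREATE TABLE shop (
    shop_id serial PRIMARY KEY
    -- other shop attributes
);

CREATE TABLE shop_class (
    shop_class_id serial PRIMARY KEY,
    shop_type     int NOT NULL,
    shop_genre    varchar(30) NOT NULL,
    UNIQUE (shop_type, shop_genre)
);

CREATE TABLE quantity (
    client_id     int NOT NULL,
    shop_id       int REFERENCES shop,
    shop_class_id int REFERENCES shop_class,
    asof          date NOT NULL,
    quantity      real,
    -- exactly one of the two references must be set
    CHECK ((shop_id IS NULL) <> (shop_class_id IS NULL))
);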

PostgreSQL sequence that ensures a unique ID

I have a table with a column ID as primary key, and a column MyNumber that contains integers generated by the sequence myUniqueSequence. I would like to define myUniqueSequence in PostgreSQL so that it returns the next free, unique number for the MyNumber column.
That is, when a new row is created programmatically the sequence should start at number 1; if 1 is free it is used for MyNumber, if not it tries 2, and so on.
Use the serial data type for your column (instead of your own sequence):
http://www.postgresql.org/docs/9.0/static/datatype-numeric.html#DATATYPE-SERIAL
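A minimal sketch of that suggestion (the table name is illustrative). Note that a sequence only guarantees uniqueness; it never goes back and reuses numbers freed by deletes or rollbacks, so it does not return the smallest free number as the question asks:
CREATE TABLE my_table (
    id        serial PRIMARY KEY,
    my_number serial UNIQUE  -- integer column backed by an implicitly created sequence
);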