No cache in sequence - Oracle 10g

Could you please help me with the scenario below.
In our application we are using a sequence defined with NOCACHE. Even so, the sequence numbers are not generated in order, and there are gaps in the generated values. Below is the sequence definition.
SEQUENCE_OWNER  SEQUENCE_NAME  MIN_VALUE  MAX_VALUE             INCREMENT_BY  C  O  CACHE_SIZE  LAST_NUMBER
DBOWNER         SEQUENCENAME   1          1.00000000000000E+27  1             N  N  0           145095
How can we generate sequence numbers without gaps?
Thanks,
Gajendra

Sequence-generated numbers are not designed to be gap-free.
For example, using a sequence number and then rolling back will not roll back the consumption of that sequence number.
If you really do need a gap-free number, you'll have to sacrifice concurrency by implementing a locking mechanism while you generate the new number and commit the new row.
Alternatively, if you need high concurrency, you can leave the value blank and fill it in asynchronously with a batch process.
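As an illustration, here is a minimal sketch of the locking approach, assuming a hypothetical single-row counter table invoice_counter and a target table invoices (both names invented for this example):
-- Hypothetical counter table: one row per gap-free series.
CREATE TABLE invoice_counter (
  id          NUMBER PRIMARY KEY,
  last_number NUMBER NOT NULL
);
INSERT INTO invoice_counter VALUES (1, 0);

DECLARE
  v_number invoice_counter.last_number%TYPE;
BEGIN
  -- FOR UPDATE serializes sessions on this row until commit/rollback.
  SELECT last_number + 1
    INTO v_number
    FROM invoice_counter
   WHERE id = 1
     FOR UPDATE;

  UPDATE invoice_counter SET last_number = v_number WHERE id = 1;
  INSERT INTO invoices (invoice_no) VALUES (v_number);

  COMMIT; -- releases the lock; a rollback instead would leave no gap
END;
/
Only one session at a time can be inside the SELECT ... FOR UPDATE / COMMIT window, which is exactly the concurrency sacrifice described above.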

Related

T-SQL set a rotating Flag (True/False) in records using Stored Proc

I can do this using multiple commands in C# for the app I'm creating, but prefer a stored proc to eliminate issues with latency/locks, etc. (hopefully):
I have a table of 10 extensions (important fields):
SortOrder, Extension, IsUsed
First record will be set to IsUsed = true
When calling the stored proc, I need the IsUsed of the NEXT record in sort order to be set to true, and the current record that is true set to false. When I hit the last record, rotate back to the first record.
Use case: I need to rotate through a bank of usable numbers. Multiple people use the app, so a number cannot be reused within the last 4 minutes (a bank of 10 will suffice, but we can extend if necessary). When a user requests a number, they get the next available one. I can build the table however needed, so any and all options to achieve the use case are welcome.
I need to set the flag to true on the 1st record when the stored proc is first called. All other records should be false.
I have seen this, which is of interest, but doesn't quite answer:
Get "next" row from SQL Server database and flag it in single transaction
If all that you're using this for is to return a number to identify a session, I'd suggest scrapping the whole table idea and letting SQL Server do the work for you.
You can create a SEQUENCE object that will cycle and return the next value for you, without needing to write any code or maintain any tables.
CREATE SEQUENCE dbo.Extension
AS integer
START WITH 5
INCREMENT BY 5
MINVALUE 5
MAXVALUE 50
CYCLE;
This will return the number 5 the first time it's called, up to the number 50 on call number 10, and then start back over. You can adjust the numbers in the code to more or less do whatever you would like, though.
Get the next value like this:
SELECT NEXT VALUE FOR dbo.Extension;
And when/if you need to extend the range:
ALTER SEQUENCE dbo.Extension
MAXVALUE 100;
Play around with the idea on the Rextester demo.
Edit: In light of the comments above and below, I'd still stick with a SEQUENCE, I think.
Every time your code calls the table for an extension, use a query along the lines of this:
SELECT Extension
FROM   ExtTable
WHERE  SortOrder = NEXT VALUE FOR dbo.Extension;
Functionally, this should do what you're after, again with no code to write or maintain.
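If the SortOrder values run 1 through 10, a sequence defined to cycle over exactly those values keeps the lookup aligned. A sketch (ExtTable and SortOrder come from the question; the sequence name is invented):
CREATE SEQUENCE dbo.ExtensionSort
    AS integer
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 10
    CYCLE;

SELECT Extension
FROM   ExtTable
WHERE  SortOrder = NEXT VALUE FOR dbo.ExtensionSort;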

Reasons behind decreasing of "last_value" of a SEQUENCE automatically in PostgreSQL?

I am using a sequence with start value = 1 and increment = 1. Suppose I insert 10 elements into the table; then "last_value" goes to 10. Now if I delete the 10th element, "last_value" still points to 10. My question is: is there any possibility that the value of "last_value" may decrease to 9, by killing Postgres, by taking a dump and then restoring the database, or in any other case?
In my case it happened (I don't know how). Please provide possible reasons for this.
There are only two cases in which a sequence "decreases":
1. The sequence reached its MAXVALUE and is reset to MINVALUE. This can be controlled with [ NO ] CYCLE in CREATE SEQUENCE.
2. The sequence is explicitly reset with setval().
There are no other ways to get the same value from a sequence twice.
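To see the first case in action, a small sketch (sequence name invented):
-- A cycling sequence wraps from MAXVALUE back to MINVALUE.
CREATE SEQUENCE demo_seq MINVALUE 1 MAXVALUE 3 CYCLE;
SELECT nextval('demo_seq'); -- 1
SELECT nextval('demo_seq'); -- 2
SELECT nextval('demo_seq'); -- 3
SELECT nextval('demo_seq'); -- 1 again: last_value has "decreased"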
You can set the value of a sequence.
SELECT setval('public.sequence_name', 9, true);
However, if you need the values to be contiguous, I'm not sure that a sequence is the most appropriate way to go about that. Perhaps using rank()?
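For example, a contiguous numbering can be computed at query time rather than stored; a sketch with an invented table and columns:
SELECT id,
       rank() OVER (ORDER BY created_at) AS seq_no
FROM   invoices;
-- note: rank() repeats values on ties; use row_number() for a
-- strictly gapless numbering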

Consecutive numbers - SQL Server 2008 R2

I need to put numbers in sequential order (1, 2, 3, 4...) in a table column.
These are invoice numbers; they need to be consecutive, with no gaps and no repeats. It is a mandatory requirement, in fact a fiscal requirement, so I can't skip it.
My current approach is to use a second "Numbers" table with 2 columns (Id, LastNumber) with one record in it, Id = 1.
This is what I'm doing now:
1. SELECT (LastNumber + 1) as Number from Numbers with (xlock, rowlock) where Id = 1
2. assign the number, do other inserts and updates to other tables.
3. UPDATE Numbers set LastNumber = #Number where Id = 1 --#Number is the number retrieved in step 1
4. End of Transaction
I'm trying to use locking, but I don't know if I'm doing it correctly. What is the most efficient solution to do this?
As I stated above, it is a mandatory requirement: the numbers must be consecutive, no gaps, no repeats.
The table is used by an app from several clients on a network. I use SQL Server 2008 R2.
I have found this similar question:
Sequential Invoice Numbers SQL server
the solution given there is some T-SQL code that gets the max number in the table, but what happens if 2 clients call this code at the same time? Will 2 repeated numbers be generated?
Thanks
You'll want a transaction. By doing the UPDATE first, you effectively lock the row against other shared locks until your transaction is completed, then return the new number.
BEGIN TRAN
UPDATE Numbers SET Number = Number + 1 WHERE Id = 12
SELECT Number FROM Numbers WHERE Id = 12
COMMIT
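A variant that increments and reads the value in a single statement via the OUTPUT clause (a sketch against the same hypothetical Numbers table):
BEGIN TRAN
-- The UPDATE's exclusive row lock is held until COMMIT, so concurrent
-- callers serialize on this row; OUTPUT returns the new value directly.
UPDATE Numbers
SET    Number = Number + 1
OUTPUT inserted.Number
WHERE  Id = 12
COMMIT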

Query distinct values from historical database

If I run this query on a large historical database without specifying a date, will kdb be smart enough to retrieve the status values from an index and not bring the database down?
select distinct status from trades
The only way kdb can possibly tell all the distinct status values is by reading from every partition. Yes, this will take a lot of memory, but unless you yourself want to maintain a cache of all distinct status values, there is nothing else you can do. As previously mentioned, an attribute will speed the query up, but the query time will still scale with the number of partitions.
To retrieve using an index, kdb provides the g# attribute. distinct alone can take more time, depending on the size of your table (it will be a linear search without the g# attribute).
Check this-> http://code.kx.com/q4m3/8_Tables/#88-attributes
Let's look at simple example:
q) a: 10000000#1 2 3 5
q) b:`g#a
q) \ts distinct a
68 134217888
q) \ts distinct b
0 288
The difference shows that the g# attribute saves a great deal of time and space during the search. That is because the g# attribute creates and maintains an index on the vector.
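The same attribute can be applied to a table column; a sketch with an invented trades-like table (on a partitioned database the attribute would have to be set on the column in every partition on disk, e.g. with setattrcol from the dbmaint.q utility script):
q)t:([] status:10000000?`new`filled`cancelled; px:10000000?100.0)
q)update `g#status from `t    / apply the grouped attribute in place
q)\ts select distinct status from t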

What is the right way to iterate through a kdb partitioned table in an client application?

I want to process all the rows of a kdb table in an R program (I use qserver.R). One way to do this is to build an in-memory table and then iterate through its rows one at a time, as explained here:
t: select from mytable where ts>12:30:00,ts<15:00:00,price,msg="A"
t[0]
t[1]
t[2]
...
I want to limit the number of client/server calls in R to loop as fast as possible.
How can I fetch multiple rows for each call?
NOTE: my answer below assumes that mytable is the partitioned database, but that you now have t in memory.
Another option is to use cut (with "chunks" of 1,000,000 as per your earlier post):
(`int$1e6) cut t
now you have a list of table "chunks" of your desired size and you can use them accordingly.
I frequently use this for certain functions (particularly in combination with peach).
A pattern I've found useful is:
f:{...}   / a function that does something useful on a chunk
fa:{...}  / a function that re-aggregates the chunk results into the final result
r:fa raze f peach (`int$size) cut t
where size is the desired chunk size.
If your t is really large (both vertically and horizontally), you might want to avoid cut directly on the table for memory reasons; instead, cut a list of indices for the table into chunks of the appropriate size, then feed the indices to your f and have it index into t and grab what you want.
Below is a quick comparison of both approaches (note that f here is pointless; it just proves the point of cut on t versus cut on indices):
q)t:flip (`$"c",/:string til 100)!{(`int$1e7)?100} each til 100
q)\ts a:raze {select c1,c99 from x}each 1000 cut t
3827 4108103072j
q)\ts b:raze {select c1,c99 from t[x]}each 1000 cut til count t
3057 217623200j
q)4108103072j%217623200j
18.87714
q)a~b
1b
From your previous questions I assume this is a one-person system, so what benefit are you getting from kdb? Why not work fully in R and just use flat memory-mapped files directly there, avoiding unneeded complexity and overhead? If all you want to do is stream the data through R in order, that should be simple.
Rather than "ts>12:30:00,ts<15:00:00", use "ts within (12:30:00;15:00:00)"; it's quicker (see the sketch below).
The larger the chunks you process at a time, the more efficient it is likely to be; 100 seems quite small.
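For example, the earlier selection could be written as (reusing the question's table and column names):
t:select from mytable where ts within (12:30:00;15:00:00), msg="A"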
Regards,
Ryan Hamilton
Sorted out, this returns 100 rows each time:
\l /data/mydb
t: select from mytable where ts>12:30:00,ts<15:00:00,price,msg="A"
select[0 100] from t
select[100 100] from t
select[200 100] from t
..