How can I get the max value of a specific field in a table? - iphone

As my question says, how can I get the maximum value of a field from a table?
In my app I have a table named dataset_master,
and that table has a field named dataset_id, which I maintain manually as a hand-rolled auto-increment.
The first time, when the table has no records, I insert the first record with dataset_id = 1 (this happens only once).
For every subsequent record I first run a query to get the maximum dataset_id and then insert that value + 1 (and so on for each new record).
In my case I use the following query to get the maximum dataset_id:
SELECT MAX(dataset_id) FROM dataset_master where project_id = 1
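The insert itself looks roughly like this (just a sketch; the name column is an example, not my real schema, and the COALESCE covers the empty-table case):
-- Compute MAX(dataset_id) + 1 for the project and insert it in one statement.
INSERT INTO dataset_master (dataset_id, project_id, name)
VALUES ((SELECT COALESCE(MAX(dataset_id), 0) + 1
         FROM dataset_master
         WHERE project_id = 1),
        1,
        'new dataset');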
So in my application I want the maximum value of the dataset_id field from the dataset_master table.
This query works properly while I am only inserting: after each insert into dataset_master I get the correct maximum dataset_id. But when I delete a contiguous range of records (say 1 to 5 out of 10) and then insert a new record, I still get the old maximum each time. For example:
If my table has 10 records, dataset_id runs from 1 to 10.
When I delete records 1 to 5, records 6 to 10 (and their dataset_id values) remain in the table.
Then, after I insert a new record, I get 10 as the maximum each time, so each new record gets dataset_id + 1, i.e. 11.
I don't know what the problem is (maybe a mistake in the query?); please give your suggestions.

You need to reset the sequence in the sqlite_sequence table. I'd advise you not to worry about this though, as by the time it becomes a problem, this will be the least of your headaches.
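If you ever do need to reset it, here is a rough sketch (this assumes dataset_master was created with AUTOINCREMENT, which is what gives it a row in sqlite_sequence):
-- Re-seed the AUTOINCREMENT counter to the current maximum dataset_id.
UPDATE sqlite_sequence
SET seq = (SELECT COALESCE(MAX(dataset_id), 0) FROM dataset_master)
WHERE name = 'dataset_master';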

I think the problem is not in your query, but in your insert. Do you force dataset_id when inserting new rows?

Related

oracle stop select duplicated value

I am trying to insert data into my table from another table which has two columns, (employee number) and (branch), and whenever new data is inserted the employee's last number value is increased.
My code is working fine, but if more than one employee is inserted at the same time they will end up with duplicated values.
For example, if I insert data with branch number 100 the employee will get number 101, and if the branch number is 200 the employee will get number 201, etc.
But if data is inserted for two employees who both have the same branch, for example 200, both of them will get number 201, whereas I want the first one to have 201 and the second one 202.
I hope you get what I mean; any help will be appreciated.
Here is my code:
insert into emp_table_1 (
Emp_Name_1,
Emp_Branch_1,
Emp_number_1
)
Select Emp_Name_2 ,
Emp_Branch_2,
Case emp_branch
When '100' Then (Select Max(Emp_number_1)+1 From emp_table_1 Where Branch_Cd=100)
When '200' Then (Select Max(Emp_number_1)+1 From emp_table_1 Where Branch_Cd=200)
End As Emp_number_2
From emp_table_2
Don't try to have sequential numbers for each branch and don't try to use MAX to find the next number in the sequence.
Use a sequence (that is what they are designed for).
CREATE SEQUENCE employee_id__seq;
Then you can use:
insert into emp_table_1 (Emp_Name_1, Emp_Branch_1, Emp_number_1)
Select Emp_Name_2 ,
Emp_Branch_2,
employee_id__seq.NEXTVAL
From emp_table_2
Then each employee will have a unique number (which you can use as a primary key) and you will not get concurrency issues if multiple people try to create new users at the same time.
Or, from Oracle 12, you could use an identity column in your table:
CREATE TABLE emp_table_1(
emp_name_1 VARCHAR2(200),
emp_branch_1 CONSTRAINT emp_table_1__branch__fk REFERENCES branch_table (branch_id),
emp_number_1 NUMBER(8,0)
GENERATED ALWAYS AS IDENTITY
CONSTRAINT emp_table_1__number__pk PRIMARY KEY
);
Then your query is simply:
insert into emp_table_1 (Emp_Name_1, Emp_Branch_1)
Select Emp_Name_2 ,
Emp_Branch_2
From emp_table_2
And the identity column will be auto-generated.

How can I sum/subtract time values from same row

I want to sum and subtract two or more timestamp columns.
I'm using PostgreSQL and I have a structure as you can see:
I can't round the minutes or seconds, so I'm trying to extract the EPOCH and do the operation afterwards, but I always get an error: the first EXTRACT recognizes the column, but when I put the second EXTRACT in the same SQL command I get an error message saying that the second column does not exist.
I'll give you an example:
SELECT
EXAMPLE.PERSON_ID,
COALESCE(EXTRACT(EPOCH from EXAMPLE.LEFT_AT),0) +
COALESCE(EXTRACT(EPOCH from EXAMPLE.ARRIVED_AT),0) AS CREDIT
FROM
EXAMPLE
WHERE
EXAMPLE.PERSON_ID = 1;
In this example I would get an error like:
Column ARRIVED_AT does not exist
Why is this happening?
Could I sum/subtract time values from same row?
Is ARRIVED_AT a calculated value instead of a column? What did you run to get the query results image you posted showing those columns?
The following script does what you expect, so there's something about the structure of the table you're querying that isn't what you expect.
CREATE SCHEMA so46801016;
SET search_path=so46801016;
CREATE TABLE trips (
person_id serial primary key,
arrived_at time,
left_at time
);
INSERT INTO trips (arrived_at, left_at) VALUES
('14:30'::time, '19:30'::time)
, ('11:27'::time, '20:00'::time)
;
SELECT
t.person_id,
COALESCE(EXTRACT(EPOCH from t.left_at),0) +
COALESCE(EXTRACT(EPOCH from t.arrived_at),0) AS credit
FROM
trips t;
DROP SCHEMA so46801016 CASCADE;
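If the real table's structure is in doubt, one quick way to see which columns PostgreSQL actually has (assuming your table is literally named example) is:
-- List the columns the server knows about for the table.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'example';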

Capture RowVersion on Insert

My tables have a RowVersion column called LastChanged.
ID | LastChanged | Foo |
I am developing some sync-related functionality. I will be selecting all records from the table between a min and max RowVersion. The initial sync won't have a min RowVersion, so I will be including all rows up to MIN_ACTIVE_ROWVERSION().
Subsequent syncs will have a min RowVersion - typically it will be the MIN_ACTIVE_ROWVERSION() from the previous sync.
Selecting rows that are between the min and max RowVersion like this is easy. However, I would also like to determine which of those rows are inserts and which are updates. The easiest way for me to do this is to add another column:
ID | LastChanged (RowVersion) | CreationRowVersion (Binary(8)) | Foo |
For CreationRowVersion - The idea is to capture the RowVersion value on insert. That value will then never change for the row. So I would like to default CreationRowVersion to the same value as RowVersion when the row is initially Inserted.
With this in place, I should then be able to determine which rows have been created and which rows have been updated since the last sync (i.e. between the min and max RowVersions): for created rows, I can look at rows whose CreationRowVersion falls within the min and max RowVersion range. For updated rows, I can look at rows whose LastChanged falls within the min and max RowVersion range, but I can also exclude rows from being detected as "updates" if their CreationRowVersion also falls between the min and max RowVersions, since then I know they are already included as inserts.
So now that the background is out of the way, it brings me to the crux of my question. What is the most efficient way to default CreationRowVersion to the RowVersion on insert? Can this be done with a default constraint on the column, or does it have to be done via a trigger? I'd like this column to be Binary(8) as this matches the datatype of RowVersion.
Thanks
Try using the MIN_ACTIVE_ROWVERSION() function as the default value for your CreationRowVersion BINARY(8) column.
CREATE TABLE dbo.RowVerTest (
ID INT IDENTITY,
LastChanged ROWVERSION,
CreationRowVersion BINARY(8)
CONSTRAINT DF_RowVerTest_CreationRowVersion DEFAULT(MIN_ACTIVE_ROWVERSION()),
Foo VARCHAR(256)
)
GO
INSERT INTO dbo.RowVerTest (Foo) VALUES ('Hello');
GO
--[LastChanged] and [CreationRowVersion] should be equal.
SELECT * FROM dbo.RowVerTest;
GO
UPDATE dbo.RowVerTest SET Foo = 'World' WHERE ID = 1;
GO
--[LastChanged] should be incremented, while [CreationRowVersion]
--should retain its original value from the insert.
SELECT * FROM dbo.RowVerTest;
GO
CAUTION: in my testing, the above only works when rows are inserted one at a time. The code for the scenario below does not appear to work for your use case:
--Insert multiple records with a single INSERT statement.
INSERT INTO dbo.RowVerTest (Foo)
SELECT TOP(5) name FROM sys.objects;
--All the new rows have the same value for [CreationRowVersion] :{
SELECT * FROM dbo.RowVerTest;
There is an existing question about referencing columns in a default statement. You can't do it, but there are other suggestions to look at, including an AFTER INSERT trigger.
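For what it's worth, here is a rough, untested sketch of that trigger approach. Note that the trigger's own UPDATE bumps LastChanged again, so the two columns end up close but not byte-for-byte equal:
CREATE TRIGGER dbo.TR_RowVerTest_CreationRowVersion
ON dbo.RowVerTest
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy each new row's insert-time rowversion into CreationRowVersion,
    -- including for multi-row inserts.
    UPDATE t
    SET CreationRowVersion = t.LastChanged
    FROM dbo.RowVerTest AS t
    INNER JOIN inserted AS i ON i.ID = t.ID;
END;
GO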
You may want to take a look at this question on RowVersion and Performance.

PostgreSQL auto-increment increases on each update

Every time I do an INSERT or UPSERT (ON CONFLICT UPDATE), the auto-increment id column on each table jumps by the number of updates that came before it.
For instance, if I have this table:
id int4
title text
description text
updated_at timestamp
created_at timestamp
And then run these queries:
INSERT INTO notifications (title, description) VALUES ('something', 'whatever'); -- generates ID = 1
UPDATE notifications SET title = 'something else' WHERE id = 1; -- repeat this query 20 times with different values
INSERT INTO notifications (title, description) VALUES ('something more', 'whatever again'); -- generates ID = 22
This is a pretty big issue. The script we are running processes 100,000+ notifications every day. This can create gaps between each insert on the order of 10,000, so we might start off with 100 rows but by the time we reach 1,000 rows we have an auto-incremented primary key ID value over 100000 for that last row.
We will quickly run out of auto-increment values on our tables if this continues.
Is our PostgreSQL server misconfigured? Using Postgres 9.5.3.
I'm using Eloquent Schema Builder (e.g. $table->increments('id')) to create the table and I don't know if that has something to do with it.
A sequence will be incremented whenever an insertion is attempted regardless of its success. A simple update (as in your example) will not increment it but an insert on conflict update will since the insert is tried before the update.
One solution is to change the id to bigint. Another is not to use a sequence and manage it yourself. And another is to do a manual upsert:
with s as (
select id
from notifications
where title = 'something'
), i as (
insert into notifications (title, description)
select 'something', 'whatever'
where not exists (select 1 from s)
)
update notifications
set title = 'something else'
where id = (select id from s)
This supposes title is unique.
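Going back to the first option, changing the id to bigint is a one-liner, though the rewrite takes an exclusive lock on the table while it runs:
-- On 9.5 the backing sequence is already 64-bit, so only the column type needs changing.
ALTER TABLE notifications ALTER COLUMN id TYPE bigint;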
You can reset the auto-increment sequence to the maximum inserted value by running this command before the insert command:
SELECT setval('notifications_id_seq', MAX(id)) FROM notifications;

SQLite - a smart way to remove and add new objects

I have a table in my database and I want each row in my table to have a unique id, with the rows numbered sequentially.
For example: I have 10 rows, each with an id, starting from 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" appears. And afterwards, when I add more data, the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every id so that I can access my table at arbitrary positions.
Is there a way in SQLite to do this, or do I have to manage the removing and adding of data manually?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
UPDATE table_name SET id=id-1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
And delete where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you re-think your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field with a PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
Sqlite creates an index for the primary key field, so this query is fast.
I think that you would be interested in reading about LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing in with.
If you want to reclaim deleted row ids, the VACUUM command or the auto_vacuum pragma may be what you seek (see the sketch after the links below):
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum
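As a rough sketch of those two options (note that VACUUM may renumber rowids only for tables without an explicit INTEGER PRIMARY KEY):
-- Rebuild the database file and reclaim free space.
VACUUM;
-- Or enable automatic reclamation; changing auto_vacuum from 'none' only
-- takes effect on a new database or after a subsequent VACUUM.
PRAGMA auto_vacuum = FULL;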