How to detect if a tableView was changed - PostgreSQL

I have the following idea: the user can edit the database, and when they press exit they will be asked whether they want to save the edited database; if they edited something and then reverted it, they won't be asked.
I think I should compare the database as it was created with the database after editing when the user presses exit, but I don't know how.
This is my code for creating the database model:
model = new QSqlRelationalTableModel(this, *db);
model->setTable("cv");
model->setFilter("cv_id = "+currentCV+"");
model->removeColumns(0,1);
model->select();
ui->tableView->show();
ui->tableView->setModel(model);

You have two options.
Create event triggers in the database. Event triggers fire when users change table structures, create tables, or alter columns; in other words, they fire whenever the user executes a DDL command (ALTER TABLE, ADD COLUMN, CREATE INDEX, DROP TABLE, etc.). You can record these commands in a log table from the event trigger (see the first sketch below).
Your database structure (all tables, columns, indexes, etc.) is described in information_schema. You can select the data from these views, save it somewhere, and later compare it with the current state to see what has changed (see the second sketch below).
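A minimal sketch of the first option, run through psycopg2 (the ddl_log table and the function/trigger names are placeholders; pg_event_trigger_ddl_commands() needs PostgreSQL 9.5+ and EXECUTE FUNCTION needs 11+):

# Sketch: create a DDL log table and an event trigger that fills it.
import psycopg2

DDL_AUDIT_SQL = """
CREATE TABLE IF NOT EXISTS ddl_log (
    executed_at     timestamptz DEFAULT now(),
    command_tag     text,
    object_type     text,
    object_identity text
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_log (command_tag, object_type, object_identity)
    SELECT command_tag, object_type, object_identity
    FROM pg_event_trigger_ddl_commands();
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_trigger
    ON ddl_command_end
    EXECUTE FUNCTION log_ddl();
"""

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder connection string
with conn, conn.cursor() as cur:
    cur.execute(DDL_AUDIT_SQL)

And a sketch of the second option: snapshot information_schema before and after, then compare (the choice of columns is just an example):

# Sketch: snapshot the table/column structure and compare it later.
SCHEMA_QUERY = """
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position
"""

def schema_snapshot(conn):
    with conn.cursor() as cur:
        cur.execute(SCHEMA_QUERY)
        return cur.fetchall()

before = schema_snapshot(conn)  # reusing the connection from the first sketch
# ... the user works with the database ...
after = schema_snapshot(conn)
if before != after:
    print("The database structure has changed.")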

Keep in mind that no change is written to the database until the user's edits are submitted. By default, a submit happens as soon as a field is edited, which does not help you. But you can set
model.setEditStrategy(QSqlTableModel.EditStrategy.OnManualSubmit)
At this point, changes are persisted only when the method model.submitAll() is called.
All you have to do at this point is connect to the dataChanged signal of your model and use a flag variable to check whether any change has been made:
data_changed = False

def on_data_changed(*args):
    # dataChanged passes the affected indexes; here we only care that it fired
    global data_changed
    data_changed = True

model.dataChanged.connect(on_data_changed)
Now you can adjust this logic to serve your specific purpose.
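For example, when the user exits you could check the flag and only then ask whether to keep the edits (a sketch using PyQt6's QMessageBox on top of the code above; swap in whatever dialog you prefer):

# Sketch: ask the user only if something actually changed, then submit or revert.
from PyQt6.QtWidgets import QMessageBox

def on_exit():
    if not data_changed:
        return  # nothing was edited, no need to ask
    answer = QMessageBox.question(
        None, "Save changes?",
        "You have edited the data. Do you want to save your changes?")
    if answer == QMessageBox.StandardButton.Yes:
        model.submitAll()   # persist the pending edits (OnManualSubmit strategy)
    else:
        model.revertAll()   # discard the pending edits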

Related

A way to know if a Firebird table's data has changed without using a trigger

Is there a way of knowing that a table's data has changed (insert/update/delete) without using a trigger on that table? Perhaps a global trigger to indicate changes on a table?
If you want notification of changes, you will need to add a trigger yourself. Firebird 3 added a new feature to simplify identifying changed rows, the pseudo-column RDB$RECORD_VERSION. This pseudo-column contains the transaction that created the current version of a row.
Alternatively, you could try to use the trace facility to monitor for changes, but that is not an out-of-the-box solution, as you will need to write the logic to parse the trace output yourself (and take things like transaction commit/rollback into account).
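To illustrate the pseudo-column, a rough sketch with the fdb driver that compares each row's version against a transaction marker you stored earlier (the table name, column names, and the marker itself are assumptions):

# Sketch: report rows whose current version was created after a saved transaction marker.
# Requires Firebird 3+ for the RDB$RECORD_VERSION pseudo-column.
import fdb

con = fdb.connect(dsn="localhost:/data/mydb.fdb", user="sysdba", password="masterkey")  # placeholders
cur = con.cursor()

last_seen_version = 12345  # assumed: a marker saved after the previous check

cur.execute("SELECT t.id, t.RDB$RECORD_VERSION FROM my_table t")
for row_id, record_version in cur.fetchall():
    if record_version > last_seen_version:
        print("row", row_id, "was changed by a newer transaction")
con.close()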

EF Code First and Saving Multiple Records

I'm just getting to grips with Entity Framework, and I can save, add, delete, etc. a single entity like so:
db.Entry(client).State = EntityState.Modified;
db.SaveChanges();
My question is, if I wanted to change several records, how should I do this? For example, I want to select all Jobs with a Type of 'new' and set the Type to 'complete'. I can select all the jobs easily enough with LINQ, but do I have to loop through, change them, set the state to modified, save changes, and move on to the next one? I'm sure there is a straightforward way that I just haven't managed to find yet.
Are you sure you need to set the EntityState? SaveChanges will call DetectChanges before saving. You can't update multiple records in a single statement as you are requesting, but you can loop through, update the values, and call SaveChanges once after the loop. This results in a single connection to the database in which all of your updated records are saved at once.
Yes, you would have to loop through each object, but you don't have to save changes after each one. You can save changes after all the changes have been made and update in a single go. Unless there is some reason you need to save before editing the next record.
However, if you have a simple operation like that, you can also just issue a SQL statement to do it.
context.Database.ExecuteSqlCommand("UPDATE table SET column = 1 WHERE column2 = 3");
Obviously, that more or less bypasses the object graph, but for a simple batch statement there's nothing wrong with doing that.

iOS - Create Database Schema (Run code only once)

I'm using FMDB for my iPhone app database and I want to create the database and table schema only once.
How can I run Objective-C code when the user installs or updates the app?
Kind regards
You can set a boolean value in NSUserDefaults. NSUserDefaults is only reset when the user deletes the app, so you can have some code that executes if a particular boolean value is not found in the user defaults (and then saves that value after execution to prevent it from being run again).
That will cover your plain 'run code once upon install' scenario - you can achieve the same for updates with a similar approach, but utilising the CFBundleVersion variable (which will be different for each version of your app).
First of all, you might not want to think about executing something during the upgrade itself, because that's not possible. As @lxt suggested, you can store a value in the preferences to indicate the database version, but it might not be bulletproof.
A common approach to solving this problem is to use self-managed metadata. When you first create the database, create an extra table named "metadata" or "properties" with two varchar columns, "name" and "value", and insert one row: ('database_ver', '1').
In your database layer (or adapter) class, create an "open" method to handle opening. Within this method, first run SELECT value FROM metadata WHERE name = 'database_ver' to check the database version. If nothing is fetched, run the table creation scripts and insert the ('database_ver', '1') row.
Later on, if you upgrade your table format, provide ALTER TABLE statements for each version and run them based on database_ver. For fresh installations after the upgrade, you can use the updated CREATE TABLE statements and set "database_ver" to "2" (or above) directly, without going through the ALTER TABLE steps.
Compared to storing a value in the preferences, it's actually more common to store it in the database itself, because even if the user backs up the file somewhere or skips a version, you can still tell the format of the database from its metadata table.
FMDB has no problem running such a mechanism.
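FMDB is Objective-C, but since it sits on top of SQLite, the same flow is easy to sketch with Python's sqlite3 module (the notes table and the example upgrade are placeholders; only the metadata handling matters):

# Sketch: open the database, read database_ver from the metadata table,
# and create or upgrade the schema accordingly.
import sqlite3

LATEST_VERSION = 2  # bump this whenever the schema changes

def open_database(path):
    conn = sqlite3.connect(path)
    exists = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = 'metadata'"
    ).fetchone()
    if not exists:
        # First run: create everything and record the current schema version.
        conn.execute("CREATE TABLE metadata (name TEXT PRIMARY KEY, value TEXT)")
        conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
        conn.execute("INSERT INTO metadata (name, value) VALUES ('database_ver', ?)",
                     (str(LATEST_VERSION),))
        conn.commit()
        return conn
    # Existing database: read the version and apply upgrades as needed.
    (version,) = conn.execute(
        "SELECT value FROM metadata WHERE name = 'database_ver'").fetchone()
    if int(version) < 2:
        conn.execute("ALTER TABLE notes ADD COLUMN updated_at TEXT")  # example upgrade
        conn.execute("UPDATE metadata SET value = '2' WHERE name = 'database_ver'")
        conn.commit()
    return conn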

Oracle Global Temporary Tables and using stored procedures and functions

We recently changed one of the databases I develop on from Oracle accounts to LDAP login accounts, and all went well for the front end used by the staff that access the system. However, we have a second method of entry restricted to admin staff who load the data into the database, and a lot of that processing is run through dbms_scheduler.
Most of the database tables have a created_by column that defaults to picking up the user name from a sys_context, but when the data loads are run from dbms_scheduler this information is not available, and hence the created_by columns all get populated with APP_GLOBAL.
I have managed to populate a Global Temporary Table (GTT) with the sys_context value and use this to populate created_by from a stored procedure called by dbms_scheduler, so my next logical step was to put this in a function and call it so it could be used throughout the system, or even be referenced from a before-insert trigger.
The problem is, when putting the code into a function the data from the GTT is not found. The table is set to preserve rows.
I have trawled many a site for an answer but have found nothing to help me. Can anyone here provide a solution?
The scheduler will be using a different session than the session that created the job - preserve rows will not make the GTT data visible in a different session.
I am assuming the created_by columns have a default value like nvl(sys_context(...),'APP_GLOBAL'). Consider passing the user name as a parameter to the job and set the context as the first step in the job.
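A sketch of that approach through cx_Oracle, assuming a stored procedure LOAD_DATA(p_user IN VARCHAR2) that sets your context first and then runs the load (the job name, procedure name, and credentials are all placeholders):

# Sketch: create a scheduler job that receives the calling user's name as an argument.
import cx_Oracle

conn = cx_Oracle.connect("app_user/app_pass@dbhost/ORCLPDB1")  # placeholder credentials
cur = conn.cursor()

cur.execute("""
begin
    dbms_scheduler.create_job(
        job_name            => 'LOAD_JOB',
        job_type            => 'STORED_PROCEDURE',
        job_action          => 'LOAD_DATA',
        number_of_arguments => 1,
        enabled             => false);
    dbms_scheduler.set_job_argument_value(
        job_name          => 'LOAD_JOB',
        argument_position => 1,
        argument_value    => sys_context('USERENV', 'SESSION_USER'));
    dbms_scheduler.enable('LOAD_JOB');
end;
""")
conn.commit()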
A weekend off and a closer look at my code showed a fatal flaw in my syntax where the selection of data from the GTT would never happen. A quick tweak and recompile and all is well.
Jack, thanks for your help.

How do you manage concurrent access to forms?

We've got a set of forms in our web application that are managed by multiple staff members. The forms are common to all staff members. Right now we've implemented a locking mechanism, but the issue is that there's no reliable way of knowing when a user has logged out of the system so that the form can be unlocked. I was wondering whether there is a better way to manage concurrent users editing the same data.
You can use optimistic concurrency, which is how the .NET data libraries are designed. Effectively, you assume that usually no one will edit a row concurrently. When a conflict does occur, you can either throw away the changes made, or try to create some nicer retry logic for when two users edit the same row.
If you keep a copy of what was in the row when you started editing it and then write your update as:
UPDATE Table SET column = changedvalue
WHERE column1 = column1prev
  AND column2 = column2prev ...
If this updates zero rows, then you know that the row changed during the edit and you can then deal with it, or simply throw an error and tell the user to try again.
You could also create some retry logic: re-read the row from the database and check whether the change made by your user and the change made in the database can be safely combined; if so, do it automatically. Or you could present a choice to the user as to whether they still wish to make their change based on the values now in the database.
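In application code this boils down to checking the affected row count after the UPDATE, for example (a sketch with psycopg2; the table and column names are made up):

# Sketch: update only if the row still holds the values we originally read.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=app")  # placeholder
cur = conn.cursor()

# Values read when the form was opened:
prev = {"id": 42, "name": "Old name", "email": "old@example.com"}

cur.execute(
    """
    UPDATE clients
    SET name = %s, email = %s
    WHERE id = %s AND name = %s AND email = %s
    """,
    ("New name", "new@example.com", prev["id"], prev["name"], prev["email"]))

if cur.rowcount == 0:
    conn.rollback()
    print("Someone else changed this record - please reload and try again.")
else:
    conn.commit()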
Do something similar to what is done in many version control systems. Allow anyone to edit the data. When the user submits the form, the database is checked for changes. If the record has not been changed prior to this submission, allow it as usual. If both changes are the same, ignore the incoming (now redundant) change.
If the second change is different from the first, the record is now in conflict. The user is presented with a new form, which indicates which fields were changed by the conflicting update. It is then the user's responsibility to resolve the conflict (by updating both sets of changes), or to allow the existing update to stand.
As Spence suggested, what you need is optimistic concurrency. A standard website that does no accounting for whether the data has changed uses what I call "last write wins". Simply put, whichever connection saves to the database last, that version of the data is the one that sticks. In optimistic concurrency, you use a "first write wins" logic such that if two connections try to save the same row at the same time, the first one that commits wins and the second is rejected.
There are two pieces to this mechanism:
The rules by which you fail the second commit
How the system or the user handles the rejected commit.
Determining whether to reject the commit
Two approaches:
Comparison column that changes each time a commit happens
Compare the data with its committed version in the database.
The first one entails using something like SQL Server's rowversion data type, which is guaranteed to change each time the row changes. The upside is that it makes it simple to roll your own logic to determine whether something has changed. When you get the data, you pull the rowversion column's value, and when you commit, you compare that value with what is currently in the database. If they are different, the data has changed since you last retrieved it and you should reject the commit; otherwise, proceed to save the data.
The second one entails comparing the columns you pulled with their existing committed values in the database. As Spence suggested, if you attempt the update and no rows were updated, then clearly one of the criteria failed. This logic can get tricky when some of the values are null. Many object relational mappers and even .NET's DataTable and DataAdapter technology can help you handle this.
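With a version column the check shrinks to a single comparison; a sketch of the rowversion flavour using pyodbc against SQL Server (the clients table and its row_ver rowversion column are placeholders):

# Sketch: read the version token with the data, then update only if it is unchanged.
import pyodbc

conn = pyodbc.connect("DSN=mydb")  # placeholder connection string
cur = conn.cursor()

# When loading the form, remember the version token alongside the data:
cur.execute("SELECT name, email, row_ver FROM clients WHERE id = ?", 42)
name, email, original_row_ver = cur.fetchone()

# When committing, include the token in the WHERE clause:
cur.execute(
    "UPDATE clients SET name = ?, email = ? WHERE id = ? AND row_ver = ?",
    "New name", "new@example.com", 42, original_row_ver)
if cur.rowcount == 0:
    conn.rollback()
    print("The record changed since you retrieved it - commit rejected.")
else:
    conn.commit()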
Handling the rejected commit
If you do not leave it up to the user, the form would show a message stating that the data has changed since they last edited it, and you would simply re-retrieve the data, overwriting their changes. As you can imagine, users aren't particularly fond of this solution, especially in a high-volume system where it might happen frequently.
A more sophisticated (and also more complicated) approach is to show the user what has changed and allow them to choose which items to try to re-commit. Behind the scenes you would retrieve the data again, overwrite the values picked by the user with their entries, and try to commit again. In a high-volume system this will still be problematic, because by the time the user has tried to re-commit, the data may have changed yet again.
The checkout concept is effectively pessimistic concurrency, where users "lock" rows. As you have discovered, it is difficult to implement in a stateless environment. Users are notorious for simply closing their browser while they have something checked out, or for using the Back button to return to a set that was checked out and trying to recommit it. IMO, it is more trouble than it is worth to go this route in a web-based solution. Assuming you write the user name that last changed a given row, with optimistic concurrency you can inform the user whose changes are rejected who saved the data before them.
I have seen this done two ways. The first is to have a "checked out" column in your database table associated with that data. Your service would have to look for this flag to see if the record is being edited. You can have the lock expire after a time threshold is met (for example via a scheduled cleanup job) if the user doesn't commit changes. The second way is having a dedicated "checked out" table that stores ids and object names (probably the table name). It would work the same way, and you would have less lookup time, theoretically. I see concurrency issues with the second method, however.
Why do you need to look for session timeout? Just synchronize access to your data (forms or whatever) and that's it.
UPDATE: If you mean you have "long transactions", where a form is locked as soon as a user opens the editor (or whatever) and remains locked until the user commits their changes, then:
either use optimistic locking and implement it by versioning the form's data table,
or note that optimistic locking can cause loss of work: if a user has been away for a long time, then tries to commit their changes, they may discover that someone else has already updated the form. In this case you may want to implement explicit "locking" of the form, where a user "locks" the form as soon as they start working on it. Another user will notice that the form is "locked" and can either communicate with the lock owner to resolve the issue, or "relock" the form for themselves, losing all updates of the first user in the process.
We put in a very simple optimistic locking scheme that works like this:
every table has a last_update_date field in it
when the form is created, the last_update_date for the record is stored in a hidden input field
when the form is POSTed, the server checks the last_update_date in the database against the date in the hidden input field
If they match, then no one else has changed the record since the form was created, so the system updates the data.
If they don't match, then someone else has changed the record since the form was created. The system sends the user back to the form edit page and tells the user that someone else edited the record and they must reapply their changes.
It is very simple and works well enough.
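A small sketch of that round trip (plain Python with psycopg2 and hand-built HTML, just to show where the hidden field goes; a real app would use its web framework's form handling, and all names here are placeholders):

# Sketch: carry last_update_date through a hidden form field and verify it on POST.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=app")  # placeholder

def render_form(record_id):
    with conn.cursor() as cur:
        cur.execute("SELECT title, body, last_update_date FROM records WHERE id = %s",
                    (record_id,))
        title, body, last_update_date = cur.fetchone()
    # The timestamp travels with the form and comes back on POST.
    return f"""
        <form method="post">
          <input type="hidden" name="last_update_date" value="{last_update_date.isoformat()}">
          <input name="title" value="{title}">
          <textarea name="body">{body}</textarea>
          <button>Save</button>
        </form>"""

def handle_post(record_id, form):
    with conn.cursor() as cur:
        cur.execute(
            """UPDATE records
               SET title = %s, body = %s, last_update_date = now()
               WHERE id = %s AND last_update_date = %s""",
            (form["title"], form["body"], record_id, form["last_update_date"]))
        if cur.rowcount == 0:
            conn.rollback()
            return "Someone else edited this record; please reapply your changes."
        conn.commit()
        return "Saved."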
You can use "timestamp" column on your table. Refer: What is the mysterious 'timestamp' datatype in Sybase?
I understand that you want to avoid overwriting existing data with consecutively updates.
If so, when the user opens a screen you have to get last "timestamp" column to the client.
After changing data just before update, you should check the "timestamp" columns(yours and db) to make sure if anyone has changed tha data while he is editing.
If its changed you will alert an error and he has to startover. If it is not, update the data. Timestamp columns updated automatically.
The simplest method is to format your update statement to include the datetime when the record was last updated. For example:
UPDATE my_table SET my_column = new_val WHERE last_updated = <datetime when record was pulled from the db>
This way the update only succeeds if no one else has changed the record since the last read.
You can show a message to the user on conflict by checking whether the update succeeded via a SELECT after the UPDATE.