How to check if database row value has changed before doing a fill on a dataset - ado.net

When I try to update a database with some modified values in a dataset, a concurrency exception is not raised if I manually change some values in the database after the Fill method on the dataset has been performed. (I only get the exception if I delete a row manually and then call the Update method of the data adapter.)
How should I check if I have a "dirty read" on my dataset?

You have several options:
Keep a copy of the original entity set to compare against, and make the is-dirty determination at the point where you need to know whether the data is dirty.
Check the DataRow's RowState property and, if it is Modified, compare the values in the Current and Original DataRowVersions. If a second change makes a value the same as the original, you could call RejectChanges, but that would reject all changes on the row; you would have to track each field manually, since the DataSet only keeps per-row or per-table changes.
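A minimal sketch of the second option, assuming an ordinary ADO.NET DataTable; the class and method names (DirtyCheck, IsRowReallyDirty) are made up for illustration:

using System;
using System.Data;

static class DirtyCheck
{
    // True only if at least one field's Current value differs from its Original value.
    public static bool IsRowReallyDirty(DataRow row)
    {
        if (row.RowState == DataRowState.Added || row.RowState == DataRowState.Deleted)
            return true;

        if (row.RowState != DataRowState.Modified)
            return false;

        foreach (DataColumn col in row.Table.Columns)
        {
            object original = row[col, DataRowVersion.Original];
            object current = row[col, DataRowVersion.Current];
            if (!Equals(original, current))
                return true;   // this field genuinely changed
        }
        return false;          // every field was edited back to its original value
    }
}

Comparing version by version like this also lets you decide which individual columns to write back, rather than relying on the row-level RowState alone.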

Related

Hibernate Envers modified flag behavior

I use Hibernate with Envers and have an entity with some columns annotated with @Audited(withModifiedFlag = true), i.e. there is an additional boolean column in the _AUD table that indicates whether the column was updated or not.
If I save a new entity, a corresponding entry is written to the _AUD table with revtype 0. My problem is with the _MOD column value: if the column is null, the _MOD entry is false, and if there is a non-null value, the _MOD entry is true. I think for a new entry (i.e. revtype=0) it would be more logical to have all _MOD values set to false, since the columns haven't been modified. Is there a way to achieve that?
The main reason those _MOD fields end up being set for values that are inserted initially is that all of the prior entity state is null, so the comparisons yield differences (e.g. non-null != null) and the field is therefore seen as having been modified. The feature does not take into account whether the operation being performed is an INSERT, UPDATE, or DELETE.
Personally, I find the current behavior more logical.
For that initial ADD operation, changing the behavior would force you to add branch logic to handle seed-value changes based on revision number or revision type, whereas using the _MOD field behavior as it is today means you can simply ignore the revision type/number and just use the toggles in any query.
Unfortunately you cannot toggle this behavior presently.
We could look at adding a configuration parameter that would allow you to influence whether the ADD operation should be treated as a modification or not. If it's something that you and others would find useful, please feel free to open a JIRA.

How to tell if a DataRow is dirty

Additional rows appended to a DataTable are returned when the RowStateFilter is DataViewRowState.ModifiedCurrent, even if they have not been edited by the user.
And the DataTable RowChanged event fires when the DataTable is first populated by a select from the database, before any edits have taken place.
Is there any convenient way to tell if a row is actually dirty?
You can keep a copy of the original record to compare against and make the is-dirty determination at the point you need to know whether it's dirty.
Check the DataRow's RowState property and, if it is Modified, compare the values in the Current and Original DataRowVersions.
See the following Stack Overflow answer, which includes code that may give you some pointers on how to implement this.
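A rough sketch along those lines, assuming a plain DataTable filled by a DataAdapter (the helper name RejectNoOpEdits is invented for illustration): it rejects the pending changes on rows that were edited back to their original values, so they stop showing up as Modified.

using System.Data;

static class DataTableCleanup
{
    // Calls RejectChanges on Modified rows whose Current values all equal
    // their Original values, so RowState reflects only real edits.
    public static void RejectNoOpEdits(DataTable table)
    {
        foreach (DataRow row in table.Rows)
        {
            if (row.RowState != DataRowState.Modified)
                continue;

            bool reallyChanged = false;
            foreach (DataColumn col in table.Columns)
            {
                if (!Equals(row[col, DataRowVersion.Original],
                            row[col, DataRowVersion.Current]))
                {
                    reallyChanged = true;
                    break;
                }
            }

            if (!reallyChanged)
                row.RejectChanges();   // drop the no-op edit from change tracking
        }
    }
}

After such a pass, a DataView with RowStateFilter = DataViewRowState.ModifiedCurrent, or a call to table.GetChanges(), should only return rows that are genuinely dirty.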

TSQL - force update of persisted computed column

I have a persisted computed column in one table with the value calculated using a user function. How can I force that column to be updated without updating any other column in that table?
UPDATE: As it turns out, this will not work as I imagined it.
I wanted a user function that contains a sub-query, fetches some data, and stores it in the computed column, but SQL Server won't allow this...
It looks like I will have to do something similar with insert/update triggers.
If you persist the value by adding the PERSISTED keyword, the value is both retained on insert and synchronized when the referenced column is updated.

How do I ignore the created column on a Zend_DB table save?

How would I keep Zend_Db's save() from trying to fill in a created column? I do not need that column for a certain model.
Don't send the data. save() is part of the Zend_Db_Table_Row API and is designed to be somewhat intelligent in the way it saves data to a row. It will perform an insert or an update of a row depending on what is required.
save() will also only update the columns that it has data for. If you don't send new data for your created column, save() won't overwrite it.
Whenever possible I let the database create and update the created and updated columns. That way I have the information available to query if I need it, but I don't have to do something in PHP that my database can do better.
Check out http://framework.zend.com/manual/1.12/en/zend.db.table.html Section "Advanced usage".
For more specific and optimized requests, you may wish to limit the
number of columns returned in a row or rowset. This can be achieved by
passing a FROM clause to the select object. The first argument in the
FROM clause is identical to that of a Zend_Db_Select object with the
addition of being able to pass an instance of Zend_Db_Table_Abstract
and have it automatically determine the table name.
Important
The rowset contains rows that are still 'valid' - they simply contain
a subset of the columns of a table. If a save() method is called on a
partial row then only the fields available will be modified.
So, if you called update(), I think it would be as simple as unsetting the value for the column you don't want to touch. Of course, database constraints will need to be honored - i.e. the column should allow nulls.

How can I (partially) automate the transfer of a FileMaker database structure and field contents to a second database?

I'm trying to copy some field values to a duplicate database. One record at a time. This is used for history and so I can delete some records in the original database to keep it fast.
I don't want to manually save the values in a variable because there are hundreds of fields. So I want to go to the first field, save the field name and value and then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object', but then I would need to name all the objects, and I can't find a script to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so lets take a look.
CASE ONE
Single file: I just want to keep a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file, it is the same). Then change any calculation field to the type of the calculation result (number, text, date, and so on). Remove any auto-entered value or calculation from any field (auto number, auto creation date, etc.). You will have a "plain" table with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, a unique invoice number for each record, you can use it for this task. But if you do not have a unique field that identifies the record, then you have to create one...
To create such a field, I recommend adding a new field to the clone table, setting it as an auto-entered calculation, and combining fields so the result is unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure that validation is set to "always", that empty values are not allowed, and that this value must be unique.
Once you set up the clone table, you can import your records, making sure that the auto-enter option is on. You can do it as many times as you like; new records will be added and no duplicates will be created.
If you want, you can make a script that moves all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time. That is, once a record is created it has no further changes.
CASE TWO
A historical table must be created but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not the case, for example when I want to keep historical invoices but at the same time track whether they are paid or not...
In this case you may use the same technique as above, but instead of importing data you must update data based on the "unique" fields that identify the record.
Hope this technique helps
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.