The problem simplified:
I have a DataSet with some DataTables...
I have a WinForms DataGrid bound to one of the DataTables.
The user adds some rows into said DataTable via the DataGrid; let's say 3 rows.
All three rows now have their RowState = DataRowState.Added.
I now begin a SQL Server transaction.
Then I call dataAdapter1.Update(dataSet1) to push the rows to SQL Server.
Row 1.. OK
Row 2.. OK
Row 3.. error at the SQL Server level (by design: I enforced a unique index)
Upon detecting this error, I roll back the SQL Server transaction.
I also try to "roll back" the DataTable / DataSet changes, using DataSet1.RejectChanges() and/or DataTable1.RejectChanges().
The problem is that neither RejectChanges() works the way I envisaged. My DataTable now has two rows (row 1, row 2) whose RowState = DataRowState.Unchanged; row 3 has disappeared altogether.
What I want to happen, when I roll back the SQL Server transaction, is for all 3 rows in the DataTable to remain in the SAME STATE they were in just prior to the call to the dataAdapter1.Update() method.
(The reason is so that the user can look at the error in the bound DataGrid, take corrective action, and attempt the update again.)
Any ideas, anyone? I.e. I am looking for something equivalent to rolling back state at the ADO.NET DataTable level.
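For illustration, here's a minimal sketch of the flow I'm describing (the connection variable is a placeholder and dataAdapter1's commands are assumed to be configured already; requires System.Data.SqlClient):
using (SqlTransaction tx = connection.BeginTransaction())
{
    dataAdapter1.InsertCommand.Transaction = tx;
    try
    {
        dataAdapter1.Update(dataSet1); // row 3 throws on the unique index
        tx.Commit();
    }
    catch (SqlException)
    {
        tx.Rollback();
        // By this point Update() has already called AcceptChanges() on
        // rows 1 and 2, so RejectChanges() leaves them Unchanged and
        // removes row 3 (an Added row) entirely.
        dataSet1.RejectChanges();
    }
}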
OK, so I figured out a way to get around this.
Get a clone of the original DataTable, and update the clone.
If an error occurs, you still have the original DataTable with its original DataRowStates; furthermore, you can copy any errors that occur in the clone back to the original DataTable, thus reflecting the errors in the DataGrid for the user to see.
If the update is successful, you simply refresh the original DataTable from the clone.
VB Code:
Try
    'daMyAdapter.Update(dsDataset, "MyDatatable") <-- replace original with the lines below.
    _dtMyDatatableClone = dsDataset.MyDatatable.Copy()
    If _dtMyDatatableClone IsNot Nothing Then
        daMyAdapter.Update(_dtMyDatatableClone)
        'If you get here, the update was successful - refresh now!
        dsDataset.MyDatatable.Clear()
        dsDataset.MyDatatable.Merge(_dtMyDatatableClone, False, MissingSchemaAction.Ignore)
    End If
Catch
    'Uh oh, put error handler here.
End Try
I had a similar issue trying to roll back changes to a DataTable that was bound to an Xceed DataGrid. Once the edits were made in the DataGrid, the edited values all became part of the DataRow's Current version. RejectChanges is only applicable for preventing the Proposed row version from becoming Current.
In order to revert the changes for a given row, I wrote a method to overwrite the Current row version with the Original version. To set a version as the Original, you simply call AcceptChanges() on the DataTable.
public static void RevertToOriginalValues(DataRow row)
{
    // Only rows that carry both versions can be reverted.
    if (row.HasVersion(DataRowVersion.Original) && row.HasVersion(DataRowVersion.Current))
    {
        for (int colIndex = 0; colIndex < row.ItemArray.Length; colIndex++)
        {
            // Copy each column's Original value over its Current value.
            var original = row[colIndex, DataRowVersion.Original];
            row[colIndex] = original;
        }
    }
}
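A quick usage sketch (the table and column here are made up for illustration):
var table = new DataTable();
table.Columns.Add("Name", typeof(string));
table.Rows.Add("original");
table.AcceptChanges();                    // snapshot: this becomes the Original version
table.Rows[0]["Name"] = "edited";         // the Current version now differs
RevertToOriginalValues(table.Rows[0]);
Console.WriteLine(table.Rows[0]["Name"]); // prints "original"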
I'm trying to set new column definitions by calling setColumnDefs using the grid API. This doesn't work as expected: the column header names are no longer updated!
See this Plunkr: Version 19.1.x
Version 19.0.0 is the latest working version.
See this Plunkr: Version 19.0.0
For me it seems to be a bug!?
In my project I'm using Angular 5 and I notice the same behaviour.
I was able to reproduce your behaviour. The following (dirty) workaround works:
gridOptions.api.setColumnDefs([]);
gridOptions.api.setColumnDefs(newColDefs);
Setting the columnDefs to an empty array and then passing the newColDefs seems to achieve what you are looking for.
I suppose it's down to the new way of change detection in the latest version.
If you update your code like this:
function updateColDef() {
    let data = [];
    columnDefs.forEach(function(colDef) {
        colDef.headerName = colDef.headerName + ' X ';
        data.push(colDef);
    });
    data.push({
        headerName: 'New Column',
    });
    gridOptions.api.setColumnDefs(data);
}
It will work as expected.
Update:
When new columns are set, the grid will compare with current columns and work out which columns are old (to be removed), new (new columns created) or kept (columns that remain will keep their state including position, filter and sort).
Comparison of column definitions is done on 1) object reference comparison and 2) column ID eg colDef.colId. If either the object reference matches, or the column ID matches, then the grid treats the columns as the same column.
In the first sample it was an object-reference comparison; in the second sample (after the update) it's the colId case.
These changes came with the 19.1 release:
AG-1591 Allow Delta Changes to Column Definitions.
I have a form that loads a single record. The user does what they need to do on the form...in this case, they enter a date, and a button becomes available to click to advance the record to the next step in the process.
I have a public function that logs the activity to tblActivity and sets the record's new Status and Location. This function takes 3 parameters and was working fine until today.
'I'm calling the function with this line from the button's Click event
LogActivity 15, Screen.ActiveForm, Me.Recordset
Public Function LogActivity(ByVal lSID As Long, Optional fForm As Form, Optional ByRef fRS As Recordset)
    With fRS
        Do Until .EOF
            Debug.Print .Fields(5)
            .MoveNext
        Loop
    End With
    ...
End Function
This should be printing the form's Status value, but fRS is passed in with no values. The form's recordset has values prior to being passed, as the form has data. Somehow it is getting lost in the pass. This was working fine; I have multiple buttons across 5 different forms that all call this same function. Suddenly, today, none of them can pass the recordset. I can think of nothing that was changed that would affect this. Most of the recent changes involved locking down fields and showing buttons at the right time... nothing related to the recordset.
Naturally, this DB is supposed to go live on Monday.
Found the problem.
I had a backup from yesterday that was working fine.
One by one, I went through the changes I logged from yesterday and found that changing some fields to .Enabled = False and .Locked = True was what was doing it. Apparently that was enough to clear all the values when passing.
I left the fields enabled and just locked them, and it passes all values correctly.
Even though this was a failure on my part, I'll leave this up in case some one else makes the same mistake I made.
**** Update ****
I also found out that if I did a
fRS.MoveLast
fRS.MoveFirst
before anything else, it found the data. I'm not sure why it started happening, but these two things seem to have fixed it completely.
I'm building a form with Yii that updates two models at once.
The form takes the inputs for each model as $modelA and $modelB and then handles them separately as described here http://www.yiiframework.com/wiki/19/how-to-use-a-single-form-to-collect-data-for-two-or-more-models/
This is all good. The difference from the example is that $modelA (documents) has to be saved and its ID retrieved, and then $modelB has to be saved including the ID from $modelA, as they are related.
There's an additional twist that $modelB has a file which needs to be saved.
My action code is as follows:
if(isset($_POST['Documents'], $_POST['DocumentVersions']))
{
    $modelA->attributes=$_POST['Documents'];
    $modelB->attributes=$_POST['DocumentVersions'];
    $valid=$modelA->validate();
    $valid=$modelB->validate() && $valid;
    if($valid)
    {
        $modelA->save(false); // don't validate as we validated above.
        $newdoc = $modelA->primaryKey; // get the ID of the document just created
        $modelB->document_id = $newdoc; // set the document_id of the DocumentVersions to be $newdoc
        // todo: set the filename to some long hash
        $modelB->file=CUploadedFile::getInstance($modelB,'file');
        // finish set filename
        $modelB->save(false);
        if($modelB->save()) {
            $modelB->file->saveAs(Yii::getPathOfAlias('webroot').'/uploads/'.$modelB->file);
        }
        $this->redirect(array('projects/myprojects','id'=>$_POST['project_id']));
    }
}
else {
    $this->render('create',array(
        'modelA'=>$modelA,
        'modelB'=>$modelB,
        'parent'=>$id,
        'userid'=>$userid,
        'categories'=>$categoriesList
    ));
}
You can see that I push the new values for 'file' and 'document_id' into $modelB. This all works no problem, but... each time I push one of these values into $modelB, I seem to get a new instance of $modelA. The net result: I get 3 new documents and 1 new version. The new version is all linked up correctly, but the other two documents are just straight duplicates.
I've tested removing the $modelB update steps, and sure enough, for each one removed, a copy of $modelA goes away (or at least the resulting database entry does).
I've no idea how to prevent this.
UPDATE....
As I put in a comment below, further testing shows the number of instances of $modelA depends on how many times the form has been submitted. Even if other pages/views are accessed in the meantime, if the form is resubmitted within a short period of time, each time I get an extra entry in the database. If this was due to some form of persistence, then I'd expect to get an extra copy of the PREVIOUS model, not multiples of the current one. So I suspect something in the way its saving, like there is some counter that's incrementing, but I've no idea where to look for this, or how to zero it each time.
Some help would be much appreciated.
thanks
JMB
OK, I had Ajax validation set to true. This was calling the create action and inserting entries. I don't fully get this, or how I could use Ajax validation without this effect if I really wanted to, but... at least the two-model insert with the relationship works.
Thanks for the comments.
cheers
JMB
Consider the following code.
// Note: "this" and "that" are stand-ins for ordinary entity properties.
var items = from i in context.Items
            select i;
var item = items.FirstOrDefault();

item.this = "that";
item.that = "this";

var items2 = from i in context.Items
             where i.this == "that"
             select i;
var data = items2.FirstOrDefault();

context.SaveChanges();
I'm trying to confirm that items2 will not include my modifications to item. In other words, items2's copy of item will not include the unsaved changes.
Have you tried it? =)
By default, your objects are tracked and cached by the context, so the objects in your second query actually do reflect the changes made in the first.
You may want to call context.Items.AsNoTracking() in one of your two "items" queries to get the behavior you want.
Edit: Actually, this is a strange question. I just noticed that your items2 hasn't even hit the database yet, since you haven't called ToList() or FirstOrDefault() on it. It remains an IQueryable that will hit the database after your code snippet and will therefore contain the changed value.
HOWEVER, if you call ToList() on items2, you'll encounter the caching scenario I outlined above.
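For illustration, a minimal sketch of both behaviors (the Id property is a placeholder, not from the original question):
// Tracked query: the context performs identity resolution, so an entity
// it is already tracking comes back as the same (modified) instance.
var tracked = context.Items.First(i => i.Id == 1);
// No-tracking query: materializes a fresh object from the database,
// so unsaved in-memory changes are not visible on it.
var untracked = context.Items.AsNoTracking().First(i => i.Id == 1);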
In the case of "var item", the query is executed the moment you use FirstOrDefault(). But for items2 the query is still not executed at that point. In your case the result of items2 will always be affected by the updates you made in the first query.
It will contain the modifications; the only way around this is to create a new context and query that new context.
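A sketch of that approach (the context class name is a placeholder):
// A brand-new context has an empty change tracker, so this query reflects
// only what has actually been saved to the database.
using (var freshContext = new MyEntitiesContext())
{
    var fresh = freshContext.Items.First(i => i.Id == 1);
}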
I need to delete all the rows in a DataTable (ADO.NET). I don't want to use foreach here.
How can I delete all the rows in one go?
Then I need to update the DataTable to the database.
Note: I have tried dataset.Tables[0].Rows.Clear() and dataset.Clear(), but neither works.
Also, I don't want to use a SQL delete query.
Please post an answer other than these if you are able to.
'foreach' isn't such a simple answer to this question either -- you run into the problem where the enumerator is no longer valid for a changed collection.
The hassle here is that the Delete() behavior differs for a row in DataRowState.Added vs. Unchanged or Modified. If you Delete() an added row, it just removes it from the collection, since the data store presumably never knew about it anyway. Delete() on an unchanged or modified row simply marks it as deleted, as others have indicated.
So I've ended up implementing an extension method like this to handle deleting all rows. If anyone has a better solution, that would be great.
public static void DeleteAllRows(this DataTable table)
{
    int idx = 0;
    while (idx < table.Rows.Count)
    {
        int curCount = table.Rows.Count;
        table.Rows[idx].Delete();
        // Deleting an Added row removes it outright (the count drops);
        // deleting any other row only flags it, so advance past it.
        if (curCount == table.Rows.Count) idx++;
    }
}
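A usage sketch, assuming a data adapter whose DeleteCommand is configured (dtParts and daParts are placeholder names):
// Flag every row as deleted, then push the deletions to the database.
dtParts.DeleteAllRows();
daParts.Update(dtParts); // issues a DELETE for each flagged row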
I'll just repost my comment here in case you want to close off this question, because I don't believe it's possible to "bulk delete" all the rows in a DataTable:
There's a difference between removing rows from a DataTable and deleting them. Deleting flags them as deleted so that when you apply changes to your database, the actual db rows are deleted. Removing them (with Clear()) just takes them out of the in-memory DataTable.
You'll have to iterate over the rows and delete them one by one. It's only a couple of lines of code.
This works:
for (int i = dtParts.Rows.Count - 1; i >= 0; i--)
dtParts.Rows[i].Delete();
This works with a lambda expression:
You do have to use List's ForEach here, but luckily it's a single line to delete your stuff from the DataTable :)
myTable.AsEnumerable().ToList().ForEach(m => m.Delete());
db.table.AsEnumerable().ToList().ForEach(e => db.ProductGroupAgreemets.Remove(e));
works for me
dataTable = dataTable.Clone();
That should do it. Clone() copies the table structure, but not the data. (DataTable.Copy() would do that.)
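For clarity, a small sketch of the difference (variable names are illustrative):
DataTable emptyCopy = sourceTable.Clone(); // schema only, no rows
DataTable fullCopy  = sourceTable.Copy();  // schema plus all the data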