Sync Framework Filtering Not Detecting Foreign Key Changes

I'm using Microsoft Sync Framework 2.1 and I've hit a stumbling block. To simplify, let's say I'm syncing two tables, ConfigSet and ConfigItem, which have the following structure:
|ConfigSet        |
|-----------------|
|ConfigSetID (PK) |
|ConfigItemID (FK)|

|ConfigItem       |
|-----------------|
|ConfigItemID (PK)|
|ConfigItem.Data  |
I'm using a filtering clause for both that's driven by ConfigSetID:
ProvisionerObject.Tables["ConfigSet"].FilterParameters.Add(new SqlParameter("#ConfigSetID", SqlDbType.UniqueIdentifier));
ProvisionerObject.Tables["ConfigSet"].FilterClause = "[side].[ConfigSetID] = #ConfigSetID"
ProvisionerObject.Tables["ConfigItem"].FilterParameters.Add(new SqlParameter("#ConfigSetID", SqlDbType.UniqueIdentifier));
ProvisionerObject.Tables["ConfigItem"].FilterClause = "[side].[ConfigItemID] in (SELECT ConfigSet.ConfigItemID FROM ConfigSet WHERE ConfigSet.ConfigSetID = #ConfigSetID)"
If I then create two ConfigItem records on the server side, 'Item1' and 'Item2', and create one ConfigSet record, 'Set1', that has a foreign key to 'Item1', and then perform a sync, it works fine and the client gets the 'Set1' record and only 'Item1' from ConfigItem.
If I then update 'Set1' on the server so that it now points to 'Item2', the sync framework detects the change in ConfigSet but then throws a foreign key constraint violation on the client saying the record 'Item2' doesn't exist.
It appears that when syncing ConfigItem it doesn't detect any changes, because technically there have been none to ConfigItem, even though the filter clause would have returned 'Item2' had this been an initial sync.
I understand that each table is synchronised independently, but is there a way I can force it to pick up the 'Item2' record even though there are no changes to that table? And even better (although I think I'm pushing my luck on this one with what the framework can do!) would be if it could remove 'Item1' on the client since this is technically no longer synchronised/referenced by the client.

Unfortunately, Sync Framework doesn't support partition realignment or rows going in and out of scope.
The simplest workaround in your scenario is to do a dummy update on Item2 to mark it as changed. However, this means the change will also be sent to the other clients that already have Item2, a wasted transaction since nothing has really changed.
As for rows on the client that have gone out of scope, you can delete them from the client and intercept the changes in the ChangesSelected event, removing the rows from the change dataset so they don't propagate up to the server. Or you can simply clean up the client table and reinitialize it based on the new filter value.
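For the dummy-update workaround, a minimal T-SQL sketch might look like the following. The table and column names are taken from the question's sketch above (with [Data] standing in for the ConfigItem.Data column), @Item2Id is just a placeholder for the ConfigItem row that 'Set1' now references, and it assumes the usual update trigger created by provisioning marks the row as changed:
-- No-op update: the value doesn't change, but the row is marked as changed
-- in change tracking, so the next filtered sync picks 'Item2' up.
UPDATE ConfigItem
SET    [Data] = [Data]
WHERE  ConfigItemID = @Item2Id;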


Why would LINQ group by results be fewer from Visual Studio compared to SQL Server and Linqpad?

There are other questions similar to mine but they didn't help me. I'm performing what should be a simple Linq group by operation, and in SQL Server Management Studio and Linqpad I get 23,859 results from a table containing 36,102 total records. This is what I believe to be the correct result.
For some reason, when I move my query into my Visual Studio application code, I get 22,463 groups - and I cannot for the life of me figure out why.
I need to group this table's rows based on unique combinations of 8 columns. The columns contain account IDs, person IDs, device IDs, premise IDs, and address columns. Basically, a person can have multiple accounts, multiple premises, multiple devices, and each premise can have its own address. I know the table design is lacking... it's customer provided and there are other columns that necessitate the format - it should not be relevant to the grouping though.
SQL Server: 23859 groups:
SELECT acct_id, per_id, dev_id, prem_id, address, city, state, postal
FROM z_AccountInfo GROUP BY acct_id, per_id, dev_id, prem_id, address, city, state, postal
ORDER BY per_id
Linqpad: 23859 groups:
//Get all rows...
List<z_AccountInfo> zAccounts = z_AccountInfo.ToList();
//Group them...
var zAccountGroups = (from za in zAccounts
group za by new { za.acct_id, za.per_id, za.dev_id, za.prem_id, za.address, za.city, za.state, za.postal } into zaGroups
select zaGroups).OrderBy(zag => zag.Key.per_id).ToList();
Visual Studio: 22463 groups - WRONG?:
//Instantiate a list I can use outside of the Entity Framework context...
List<z_AccountInfo> zAccounts = new List<z_AccountInfo>();
using (Entities db = Entities.CreateEntitiesForSpecificDatabaseName(implementation))
{
//Get all rows. Count verified to be correct...
zAccounts = db.z_AccountInfo.OrderBy(z => z.per_id).ToList();
}
// Group the rows. Doesn't work??? 22463 groups?
var zAccountGroups = (from z_AccountInfo za in zAccounts
group za by new { za.acct_id, za.per_id, za.dev_id, za.prem_id, za.address, za.city, za.state, za.postal } into zag
select zag).ToList();
I'm hoping someone can spot a syntax issue or something else I'm missing. It seems like Visual Studio is grouping something, but it's off by 1,396 groups... that's pretty significant.
UPDATE:
sgmoore's comment below put me on the track of making sure the zAccounts list from Linqpad and Visual Studio match. They do not!?! Querying the table in SQL Server shows this data (account / device / premise)
Inspecting the Visual Studio output in Beyond Compare shows the device ID 6106471 being erroneously repeated / duplicated for the 4 bottom rows... meaning there should be 2 groups here, but my query will only see 1...
Since I'm using Entity Framework to query the data in the table in Visual Studio, this makes me think something is wrong with my model but I have no idea what it could be. Beyond compare shows this same issue happening multiple times and explains why the group numbers are off. It's like EF knows there are 8 rows (in this case) - but the field that differentiates them doesn't come through.
I tried truncating the table and re-adding all of the data into it and re-running and the bad behavior persists. Quite confused here - I've never had this kind of issue with Entity Framework before.
I even ran SQL Profiler when VS was executing and trapped the query Entity Framework is firing to populate zAccounts. That query when fired by itself in SQL Server correctly shows the four 7066550 rows. This seems to be squarely on Entity Framework and the ToList() call that populates the full collection - ideas anyone?
Short answer - make sure the table in the Entity Framework model has an Entity Key on a column where the values of the column are unique.
Longer answer - to troubleshoot, I ran SQL Profiler to ensure that the query EF was sending to SQL Server was correct - and it was. I ran that query and inspected the results to see the data I was wanting. The problem was my model. I had an Entity Key set on a field that did not contain unique values. My guess is that EF assumes that since the field is set as the Entity Key, the values must be unique. Based on that, it somehow indexes or caches the first row with a given "id" and then projects that row's values into the query results. That is a bad assumption in my view if there is not a validation check of the field marked as the Entity Key. I realize I'm to blame here for telling it to use a non-unique field as the Entity Key - but I don't see the case where this would be a good idea without it throwing at least a warning.
Anyway, to resolve it, I added a proper id column to the table and set its Identity specification and auto-increment so that every row in the table has a unique id. After that, I updated my edmx to use the new column as the Entity Key, re-ran my code, and everything magically started working.
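For reference, a rough T-SQL sketch of that fix; the RowId column and constraint name here are illustrative, not from the original table:
-- Add a column that is guaranteed unique, then point the edmx Entity Key at it.
ALTER TABLE z_AccountInfo ADD RowId INT IDENTITY(1,1) NOT NULL;
ALTER TABLE z_AccountInfo ADD CONSTRAINT PK_z_AccountInfo PRIMARY KEY (RowId);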

SSIS deadlock because of the need for parallel updates inside the same data flow task

I am creating a data flow task which will be extracting data from a source table and will be updating a destination table as follows:
1) Use the unique id in the source record to find the record you want to update in the destination table.
2) If the ID does not exist in the destination table, check whether the email of the source record exists in the destination table instead.
a) If the email exists, update the destination record through the email. Also update the unique id of that destination record.
b) If the email does not exist, insert a new record to the destination table.
So, in simple terms, I am creating a task that updates a table on its unique id and, if it does not find a match, attempts to update on its email. If it still does not find a match, it inserts a new record.
This means that I will have two updates running in parallel as you can see in the image (the two circled components will be running in parallel)
SSIS_Data_Flow_Task
Now, this generates a deadlock issue because of those two updates.
I have tried using With (NOLOCK) but this hint is for reading data, not updating it. I also have searched for delay tasks to delay one of the two data pipelines until the other is finished.
Any ideas? Could I maybe design my data flow task differently in order to avoid having multiple parallel updates in the first place?
Any help will be greatly appreciated.
With these types of flows I always work with a work table (destination table id, work type (U or I), ...). In a first step I fill the table with the work that needs to be done, then I apply the work.
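A rough T-SQL sketch of that work-table pattern (all table and column names here are hypothetical, not taken from the package):
-- Step 1: classify every source row once - match on unique id, fall back to email.
INSERT INTO dbo.WorkQueue (SourceId, DestId, WorkType)
SELECT s.Id,
       COALESCE(d_id.DestId, d_mail.DestId),
       CASE WHEN COALESCE(d_id.DestId, d_mail.DestId) IS NOT NULL THEN 'U' ELSE 'I' END
FROM   dbo.SourceTable s
LEFT JOIN dbo.DestTable d_id   ON d_id.UniqueId = s.UniqueId
LEFT JOIN dbo.DestTable d_mail ON d_mail.Email  = s.Email;

-- Step 2: apply the work serially - one UPDATE statement, then one INSERT statement,
-- so there are never two update paths running in parallel against the same table.
UPDATE d
SET    d.UniqueId = s.UniqueId, d.Email = s.Email /* ...other columns... */
FROM   dbo.DestTable d
JOIN   dbo.WorkQueue   w ON w.DestId = d.DestId AND w.WorkType = 'U'
JOIN   dbo.SourceTable s ON s.Id = w.SourceId;

INSERT INTO dbo.DestTable (UniqueId, Email /* ...other columns... */)
SELECT s.UniqueId, s.Email /* ...other columns... */
FROM   dbo.SourceTable s
JOIN   dbo.WorkQueue w ON w.SourceId = s.Id AND w.WorkType = 'I';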

CDC multiple insert/delete of the same identity value

I have a table T that contains an ID set as identity and primary key. I enabled CDC on the table and then later added an XML field that I didn't care about capturing, so I did not do anything further (to recreate the capture table and/or migrate old capture data).
I now have a stored procedure that (among other things) updates only the newly created field (no other field) in table T. I notice that instead of recording an update (operation=3 followed by operation=4), CDC records a delete (operation=1) followed by an insert (operation=2), and all fields are the same (of course, since none of them was updated).
I actually noticed this because I had the same identity value inserted and/or deleted more than once, which is not possible (unless identity_insert is on, which it is not).
Why does CDC record operation=1 instead of 3 and operation=2 instead of 4?
Is this documented anywhere or is it a bug?
The reason you are seeing a delete/insert pair (operations 1/2) as opposed to an update pair (3/4) is that you are updating a "set" of data on a column that ALSO has a unique constraint.
For SQL to make sense of this without violating the unique constraint, it deletes the row and reinserts it (with the "update").
More information on this: it's not an issue or a defect, it's the way SQL works, and CDC innocently logs it as it sees it. Remember, CDC is just a subscriber and replicates things as they happen.
If you have a need to see an update, you may have to look for the 1/2 "pair" and not ONLY the operation codes 3/4.
Some great articles:
Bounded Update is the term used to describe certain types of UPDATE statements from the publisher that will replicate as DELETE/INSERT pairs on the subscriber. We perform a bounded update for every set-based update that changes a column that is part of a unique index or constraint. In other words, if an UPDATE statement touches more than one row and modifies a column that has any UNIQUE constraints, the UPDATE statement is sent to the subscriber as a DELETE/INSERT pair ... read more here
https://support.microsoft.com/en-us/kb/238254
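If you need to report those delete/insert pairs as logical updates, here is a minimal T-SQL sketch; the capture instance name dbo_T and the ID column are assumptions based on the question:
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_T');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

-- __$operation: 1 = delete, 2 = insert, 3 = update (before), 4 = update (after).
-- A 1/2 pair for the same ID within the same __$start_lsn is the bounded-update
-- case described above and can be treated as a single logical update.
SELECT __$start_lsn, __$seqval, __$operation, ID
FROM   cdc.fn_cdc_get_all_changes_dbo_T(@from_lsn, @to_lsn, N'all')
ORDER BY __$start_lsn, __$seqval, __$operation;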

Insert record in table if it does not exist in iPhone app

I am obtaining a JSON array from a url and inserting data into a table. Since the contents of the url are subject to change, I want to make a second connection to the url, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The url lists the changes on the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert values into my database. I want a check so that duplicated entries for the day are removed. The problem could be solved by adding a constraint, but since the url itself has duplicated values, I find that difficult.
The only way I can see to do it, if you have no primary key or anything else unique to each record, is to go through the new entries as they come in and, for each one, check whether the exact same data already exists in the database. If it doesn't, you add it; if it does, you skip over it.
You could even do something like create a unique key yourself for each entry which is a concatenation of each column of the table. That way you can quickly do the check for if the entry already exists in the database.
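A minimal sqlite3 sketch of that check-before-insert idea, with hypothetical table and column names:
-- Returns 0 if this exact record is not in the table yet; only then run the INSERT.
SELECT COUNT(*) FROM entries
WHERE  entry_date = ? AND title = ? AND amount = ?;

INSERT INTO entries (entry_date, title, amount) VALUES (?, ?, ?);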
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (this can be through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause (there is a small sketch after these two options):
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
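For the first option, a small sketch (again with hypothetical table and column names) using a multi-column UNIQUE index so that INSERT OR IGNORE silently drops the duplicates coming from the url:
CREATE UNIQUE INDEX IF NOT EXISTS idx_entries_unique
    ON entries (entry_date, title, amount);

-- A duplicate row now violates the index and is ignored instead of inserted twice.
INSERT OR IGNORE INTO entries (entry_date, title, amount)
VALUES ('2012-05-01', 'Example item', 10);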
A more common & robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server it provides the timestamp corresponding to the last time it synced. The server then queries its database and returns all values that are timestamped later than the timestamp provided by the app. Then it also returns a new timestamp value for the app to store, to use on the next sync.

Oracle 10g: What's a good, academic approach to keeping a record from being updated consecutive times?

We have a table called Contracts. These contract records are created by users on an external site and must be approved or rejected by staff on an internal site. When a contract is rejected, it's simply deleted from the db. When it's accepted, however, a new record is generated called Contract Acceptance which is written to its own table and is derived from data that exists on the contract.
The problem is that two internal staff members may each end up opening the same contract. The first user accepts and a contract acceptance record is generated. Then, with the same contract record still open on the page, the second user accepts the contract again, creating a duplicate acceptance record.
The quick and dirty way to get past this is to retrieve the contract from the db just before it's accepted, check the status, and produce an error message saying that it's already been accepted. This would probably work for most circumstances, but the users could still click the Accept button at the exact same time and sneak by this validation code.
I've also considered a thread lock deep in the data layer that prevents two threads from entering the same region of code at the same time, but the app exists on two load-balanced servers, so the users could be on separate servers which would render this approach useless.
The only method I can think of would have to exist at the database level. Conceptually, I would like to somehow lock the stored procedure or table so that it can't be updated twice at the same time, but perhaps I don't understand Oracle well enough here. How do updates work? Are update requests somehow queued up so that they do not occur at the exact same time? If so, I could check the status of the record in the SQL and return a value in an out parameter stating it has already been accepted. But if update requests aren't queued, then two people could still get into the update SQL at the exact same time.
Looking for good suggestions on how to go about this.
First, if there can only be one Contract Acceptance per Contract, then Contract Acceptance should have the Contract ID as its own primary (or unique) key: that will make duplicates impossible.
Second, to prevent the second user from trying to accept the contract while the first user is accepting it, you can make the acceptance process lock the Contract row:
select ...
from Contract
where contract_id = :the_contract
for update nowait;
insert into Contract_Acceptance ...
The second user's attempt to accept will then fail with an exception:
ORA-00054: resource busy and acquire with nowait specified
In general, there are two approaches to the problem
Option 1: Pessimistic Locking
In this scenario, you're pessimistic so you lock the row in the table when you select it. When a user queries the Contracts table, they'd do something like
SELECT *
FROM contracts
WHERE contract_id = <<some contract ID>>
FOR UPDATE NOWAIT;
Whoever selects the record first will lock it. Whoever selects the record second will get an ORA-00054 error that the application will then catch and let them know that another user has already locked the record. When the first user completes their work, they issue their INSERT into the Contract_Acceptance table and commit their transaction. This releases the lock on the row in the Contracts table.
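A PL/SQL sketch of how the application side might catch that error; the names are illustrative and the real work between locking and inserting is elided:
DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);
  l_contract contracts%ROWTYPE;
BEGIN
  SELECT *
    INTO l_contract
    FROM contracts
   WHERE contract_id = :the_contract
     FOR UPDATE NOWAIT;

  INSERT INTO contract_acceptance (contract_id /* , ... */)
  VALUES (l_contract.contract_id /* , ... */);

  COMMIT;
EXCEPTION
  WHEN row_locked THEN
    -- another user already has the row locked; surface that to the caller
    RAISE_APPLICATION_ERROR(-20001, 'Contract is being processed by another user.');
END;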
Option 2: Optimistic Locking
In this scenario, you're being optimistic that the two users won't conflict so you don't lock the record initially. Instead, you select the data you need along with a Last_Updated_Timestamp column that you add to the table if it doesn't already exist. Something like
SELECT <<list of columns>>, Last_Updated_Timestamp
FROM Contracts
WHERE contract_id = <<some contract ID>>
When a user accepts the contract, before doing the INSERT into Contract_Acceptance, they issue an UPDATE on Contracts
UPDATE Contracts
SET last_updated_timestamp = systimestamp
WHERE contract_id = <<some contract ID>>
AND last_updated_timestamp = <<timestamp from the initial SELECT>>;
The first person to do this update will succeed (the statement will update 1 row). The second person to do this will update 0 rows. The application detects the fact that the update didn't modify any rows and tells the second user that someone else has already processed the row.
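Sketched in PL/SQL with the same illustrative names as above:
BEGIN
  UPDATE contracts
     SET last_updated_timestamp = SYSTIMESTAMP
   WHERE contract_id = :the_contract
     AND last_updated_timestamp = :timestamp_from_select;

  IF SQL%ROWCOUNT = 0 THEN
    -- someone else already processed this contract since we read it
    RAISE_APPLICATION_ERROR(-20002, 'Contract was already accepted by another user.');
  END IF;

  INSERT INTO contract_acceptance (contract_id /* , ... */)
  VALUES (:the_contract /* , ... */);

  COMMIT;
END;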
In Either Case
In either case, you probably want to add a UNIQUE constraint to the Contract_Acceptance table. This will ensure that there is only one row in the Contract_Acceptance table for any given Contract_ID.
ALTER TABLE Contract_Acceptance
ADD CONSTRAINT unique_contract_id UNIQUE (Contract_ID);
This is a second line of defense that should never be needed but protects you in case the application doesn't implement its logic correctly.