SQL Server merge replication on top of transactional replication

I have the following scenario:
- Server 1 has a database "data_server1" and a transactional publication (over the internet) called TANS_PUB (9 tables as articles).
- Server 2 is a subscriber to TANS_PUB and has a local database "data_server2".
Note: data_server1 and data_server2 have the same structure (schema), and the transactional replication works very well.
Now, on server 2 I created a merge publication (over the internet, with all tables as articles) called MERG_PUB and made server 1 a subscriber to it. This merge publication goes from data_server2 to data_server1_2 on server 1. This replication also works very well.
The problem is:
If one of the 9 tables on server 1 (for example TAB1) is updated manually or by a program, TAB1 on server 2 is updated (by the replication based on TANS_PUB), but TAB1 in data_server1_2 (server 1) is not updated; in this case MERG_PUB is not working. Note that if I update TAB1 on server 2 manually or by a program, TAB1 in data_server1_2 (server 1) is updated correctly.
Can you help, please?
Many thanks

It sounds like you are using the republisher model with both transactional and merge replication, and updates originating upstream are not making it all the way downstream. In this model, by default, the Distribution Agent does not fire the merge triggers when performing inserts/updates/deletes, so the changes are never recorded in the merge tracking tables and therefore never get replicated to the merge subscribers.
To alleviate this problem, set the merge article property published_in_tran_pub to true for all merge articles that also participate in the transactional publication:
USE MergePublicationDB
GO
EXEC sp_changemergearticle
    @publication = N'MergePublicationName',
    @article     = N'MergeArticleName',
    @property    = N'published_in_tran_pub',
    @value       = N'true'
GO
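Since all nine articles need the property, something like the following loop can set it in one pass. This is only a sketch: it assumes it runs in the merge publication database, that sysmergearticles lists exactly the articles you want to change, and that 'MergePublicationName' is replaced with your actual publication name.
USE MergePublicationDB
GO
DECLARE @name sysname;
DECLARE articles CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM dbo.sysmergearticles;   -- merge articles in this database
OPEN articles;
FETCH NEXT FROM articles INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_changemergearticle
        @publication = N'MergePublicationName',
        @article     = @name,
        @property    = N'published_in_tran_pub',
        @value       = N'true';
    FETCH NEXT FROM articles INTO @name;
END
CLOSE articles;
DEALLOCATE articles;
GO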

Related

Neo4j 3.0.6 Uniqueness constraint being violated with MERGE

I am running Neo4j 3.0.6, and am importing large amounts of data into a fresh instance from multiple sources. I enforce uniqueness using the following constraint:
CREATE CONSTRAINT ON (n:Person) ASSERT n.id IS UNIQUE
I will then import data and relationships from multiple sources and multiple threads:
MERGE (mother:Person{id: 22})
MERGE (father:Person{id: 55})
MERGE (self:Person{id: 128})
SET self += {name: "Alan"}
MERGE (self)-[:MOTHER]->(mother)
MERGE (self)-[:FATHER]->(father)
Meanwhile, on another thread, but still on the same Neo4j server and bolt endpoint, I'll be importing the rest of the data:
MERGE (husband:Person{id: 55})
MERGE (self:Person{id: 22})
SET self += {name: "Phyllis"}
MERGE (self)-[:HUSBAND]->(husband)
MERGE (wife:Person{id: 22})
MERGE (self:Person{id: 55})
SET self += {name: "Angel"}
MERGE (self)-[:WIFE]->(wife)
MERGE (brother:Person{id: 128})
MERGE (self:Person{id: 92})
SET self += {name: "Brian"}
MERGE (self)-[:BROTHER]->(brother)
MERGE (self)<-[:BROTHER]-(brother)
Finally, if I run the constraint command again, I get this:
Unable to create CONSTRAINT ON ( Person:Person ) ASSERT Person.id IS UNIQUE:
Multiple nodes with label `Person` have property `id` = 55:
node(708823)
node(708827)
There is no guarantee which order the records will be processed in. What ends up happening is that multiple nodes for the same (:Person {id}) get created, but only one gets populated with name data.
It appears there is a race condition in Neo4j: if two MERGEs for the same id happen at the same time, both nodes get created. Is there a way to avoid this race condition? Is there a way to take the necessary locks?
Possible duplicate: Neo4J 2.1.3 Uniqueness Constraint Being Violated, Is This A Bug? But that question is about CREATE, and this Google Groups answer indicates that CREATE behaves differently from MERGE with respect to constraints.
I understand that you can take a write lock on some designated node and use it for synchronization, but that effectively serializes the processing, so the imports won't really run concurrently.
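For reference, that lock-node workaround usually looks something like the sketch below. The :Lock label and its name are an arbitrary convention, not a Neo4j feature, and the lock node should be created once up front so the writers don't race on creating it. Writing a property on the shared node takes a write lock that is held until the transaction commits, which serializes the competing transactions:
// In the same transaction as the MERGEs that must be serialized
// (the lock node is assumed to already exist):
MATCH (lock:Lock {name: 'person-import'})
SET lock.dummy = true        // acquires a write lock held until commit
WITH lock
MERGE (self:Person {id: 55})
SET self += {name: "Angel"}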
Overall I think a better approach would be to abandon processing the same kind of data in multiple threads and just do a single-threaded import that MERGEs the :Person nodes and sets their properties.
After that's imported, you can process the creation of your relationships, with the understanding that you'll be MATCHing instead of MERGEing on :Person nodes.
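A rough sketch of that two-phase approach, reusing the ids and names from the question (batching and transaction boundaries are up to you):
// Phase 1: one writer thread creates/updates every person first.
MERGE (p:Person {id: 22})  SET p += {name: "Phyllis"};
MERGE (p:Person {id: 55})  SET p += {name: "Angel"};
MERGE (p:Person {id: 128}) SET p += {name: "Alan"};

// Phase 2: safe to parallelize -- the nodes already exist,
// so MATCH them and only MERGE the relationships.
MATCH (self:Person {id: 128}), (mother:Person {id: 22})
MERGE (self)-[:MOTHER]->(mother);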

`collection.sync()` doesn't work as expected [Kinto.js]

I have two clients A and B which performed these operations:
Client A created and .sync()ed a one-record collection.
Client B .sync()ed and received the collection with the single record.
Client A deleted the record and .sync()ed. At this point there is no record in either client A (checked via the JS API and the IndexedDB API) or the server (I checked with HTTP calls).
Client B .sync()ed, but the record is still there.
I don't think this is the intended behavior. What could cause this?
P.S. Client A deletes with virtual: false because it doesn't need the records in the local DB anymore. Might that be it? Does this change anything on the server?
If you use virtual: false, you never notify the server that you have deleted the record.
If you want the deletion to be synced, don't use virtual: false: the record will then be deleted locally after your next sync.
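In code, the difference looks roughly like this (a sketch; the server URL, collection name, and record id are placeholders):
import Kinto from "kinto";

const kinto = new Kinto({ remote: "https://kinto.example.com/v1" });
const articles = kinto.collection("articles");

// Virtual delete (the default): keeps a local tombstone so the deletion
// is pushed to the server on the next sync, then cleaned up locally.
await articles.delete("record-id");
await articles.sync();

// virtual: false removes the record locally with no tombstone,
// so the server and the other clients never learn about the deletion.
await articles.delete("record-id", { virtual: false });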

MongoDB - Lock collection and insert record

I am trying to implement centralized coordination logic across multiple VMs.
I have one application running on, say, 5 VMs. Only 1 VM should be responsible for doing a given task.
To achieve that, I write that VM's host name to the database. But to update that name in the database I have to do some locking using the Java client API, as 2-3 VMs can come up at the same time.
How do I achieve that?
UPDATE:
I can use findAndModify, but my code looks like this:
if (collection.getCount({"taskName": "Task1"}) == 0) {
    // insert record ------ I can use findAndModify here
}
But if two VMs come up at the same time, both will enter the if block, as the document does not exist yet.
I understand that findAndModify is atomic. So after 1 VM issues the findAndModify command we will have 1 record with a hostname, but the next VM will do the same operation and overwrite the record with its own hostname.
Please let me know if my question is not clear.
Updates to a single document in MongoDB are atomic, so you do not have to implement a locking mechanism on the client side.
Update
On the query side you can check whether the document already has a "hostname" field with $exists. If the document for Task1 has a hostname, it means a VM is already responsible for the task. I don't know your whole use case, but let's say VM1 finishes the task: it can then update the document to remove the "hostname" field. Alternatively, you can use a default value such as '' for hostname and query for documents where hostname is '' and taskName is 'Task1'.
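As a sketch of the claim-a-task pattern (mongo shell; the collection name "tasks" and the hostname "vm-01" are placeholders): the whole check-and-claim happens in one atomic findAndModify, so only the first VM's hostname is ever written.
// A unique index guards against the rare double-insert race on upsert.
db.tasks.createIndex({ taskName: 1 }, { unique: true });

// Atomically claim "Task1": $setOnInsert only writes hostname when the
// document is being created, so a second VM's call leaves it untouched.
var res = db.tasks.findAndModify({
    query:  { taskName: "Task1" },
    update: { $setOnInsert: { hostname: "vm-01" } },
    upsert: true,
    new:    true
});
if (res.hostname === "vm-01") {
    // this VM won the claim; do the work
}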

Tridion OutboundEmail - Contact synchronization from several Presentation servers?

I'm facing a problem with OutboundEmail Synchronization for Contacts.
We have the following scenario : 2 load-balanced CMS servers and 3 load-balanced CDE web servers located in different data centers.
Each CDE web server will have its own SQL Server for the broker DB and the OutboundEmail Subscription + Tracking DB.
If I install a local OutboundEmail Subscription + Tracking DB on each CDE, how can I process the Contact synchronization from the 3 CDE servers, knowing that for a specific Tridion publication you can only specify 1 synchronization target containing 1 URL to profilesync.aspx?
And the same applies to Tracking synchronization.
I must be missing something ...
Any suggestion please?
This scenario is currently not supported. We do support multiple presentation servers, but as you mentioned, you can only specify one synchronization target under a publication.
Without going into detail, there were compelling reasons not to support this scenario at that point in time, but it is on our backlog.
I can think of a couple of options to solve it in this version:
- use one database, though I'm guessing the reason to split it up over 3 data centers is fail-over/redundancy and/or geography
- set up synchronization/tracking on one server and replicate the data to the other 2 databases; note that the replication needs to be bi-directional

Merge replication for newly created tables

I have two SQL Server 2008 R2 Standard servers using merge replication. Sometimes new tables are created on the subscriber, and I want them to be replicated to the publisher.
Is there an option in SQL Server that allows me to replicate newly created tables to the publisher, or do I have to write a custom procedure to do this?
If you have another suggestion (like using something other than merge replication), you are welcome.
Note: some clients are connected to the subscriber and others to the publisher, and no, I can't shift all the clients to the publisher.
The steps are the following:
Create the table on the publisher
Add the table to the publication with sp_addmergearticle (see the sketch below)
Recreate the snapshot
Restart the subscription
The subscriber will then be updated with the latest additions in the snapshot: only the new table(s) will be sent to the subscriber.
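For step 2, a minimal sketch (the publication and table names are placeholders; run it in the publication database on the publisher):
-- Add the new table as a merge article.
EXEC sp_addmergearticle
    @publication   = N'MergePublicationName',   -- placeholder
    @article       = N'NewTable',
    @source_owner  = N'dbo',
    @source_object = N'NewTable';

-- Regenerate the snapshot so the new article reaches subscribers.
EXEC sp_startpublication_snapshot @publication = N'MergePublicationName';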
If you still need some help ...