I'm feeling a little stupid, and my searches haven't turned up anyone else with this problem.
Imagine that I have NodeHQ, Node1 and Node2. I have created triggers to synchronize TableA between the 3 like so:
Node1 <---> NodeHQ <---> Node2
Node1 and Node2 have different subsets of data from each other. NodeHQ has administrative information from both nodes (subsets of both). Each of the 3 nodes is in a different NODE_GROUP.
Right now, with the triggers and routers I have set up, inserting/updating/deleting a record at NodeHQ is correctly replicated to Node1 and Node2. However, if I make a change at Node1 or Node2, it only makes it to NodeHQ. It never passes through to the other node.
So far I've tried:
Setting SYNC_ON_INCOMING_BATCH to 1 for the triggers involved, no change
Creating separate SYM_TRIGGER for each NODE_GROUP, no change
Using transforms to alter the record innocuously, no change
Deleting and then Inserting all of the rules, no change
Using symadmin sync-triggers -f to force trigger recreation, no change
I've read the user guides up and down on this, and they are not very specific about it. http://www.symmetricds.org/doc/3.6/user-guide/html/advanced.html#bi-direction-sync
Right now, all of the nodes have SYNC_ENABLED=1. All of the SYM_TRIGGERs are set for SYNC_ON_INCOMING_BATCH=1. My SYM_ROUTERs are all set to SYNC=1, and are using ROUTER_TYPE='default'. I'll be honest, I've tried a lot of other small things, but nothing seems to make it pass on data to the next NODE_GROUP. I'm running out of ideas.
The documentation indicates that SYNC_ON_INCOMING_BATCH makes a trigger capture incoming changes and pass the data on to other nodes at each node where it arrives. So far, though, my changes to that setting have yielded nothing. What's left to try? Or what do you think I should do?
I am using Firebird 2.5.2 and SQL Dialect 1.
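For concreteness, this is roughly the shape of the configuration rows I mean (a sketch with placeholder ids, using the fdb Firebird driver; my real group ids, channel, and table names differ):

    # pip install fdb -- sketch of the SYM_* rows for HQ-centric bidirectional sync
    import fdb

    con = fdb.connect(dsn='localhost:/data/hq.fdb', user='SYSDBA', password='masterkey')
    cur = con.cursor()

    # One trigger on TABLEA; SYNC_ON_INCOMING_BATCH=1 so changes that arrive
    # in an incoming batch are re-captured for routing to the other group.
    cur.execute("""
        INSERT INTO SYM_TRIGGER
            (TRIGGER_ID, SOURCE_TABLE_NAME, CHANNEL_ID,
             SYNC_ON_INCOMING_BATCH, CREATE_TIME, LAST_UPDATE_TIME)
        VALUES ('tablea', 'TABLEA', 'default', 1,
                CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
    """)

    # Default routers in both directions between hq and each leaf group.
    routers = [('hq_2_node1', 'hq', 'node1'), ('node1_2_hq', 'node1', 'hq'),
               ('hq_2_node2', 'hq', 'node2'), ('node2_2_hq', 'node2', 'hq')]
    for rid, src, tgt in routers:
        cur.execute("""
            INSERT INTO SYM_ROUTER
                (ROUTER_ID, SOURCE_NODE_GROUP_ID, TARGET_NODE_GROUP_ID,
                 ROUTER_TYPE, CREATE_TIME, LAST_UPDATE_TIME)
            VALUES (?, ?, ?, 'default', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
        """, (rid, src, tgt))

    # Link the one trigger to all four routers.
    for rid, _, _ in routers:
        cur.execute("""
            INSERT INTO SYM_TRIGGER_ROUTER
                (TRIGGER_ID, ROUTER_ID, INITIAL_LOAD_ORDER,
                 CREATE_TIME, LAST_UPDATE_TIME)
            VALUES ('tablea', ?, 1, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)
        """, (rid,))
    con.commit()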
So, running SymmetricDS 3.7.19 in debug mode, I discovered that the triggers weren't actually being regenerated properly in most of the circumstances where I was changing the SYM tables, even though the logs indicated that the related triggers were being remade whenever I changed the rules.
The solution: running symadmin sync-triggers -f on every engine. This forces every single trigger to be regenerated, and it seems to have fixed the problem. I'll definitely track this down further to help the developers nip it in the bud.
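For reference, the fix amounted to running this once per engine (engine names here are placeholders; if multiple engines run in one installation you can select each with --engine, otherwise run it from each node's own installation):

    symadmin --engine hq sync-triggers -f
    symadmin --engine node1 sync-triggers -f
    symadmin --engine node2 sync-triggers -f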
I made a Zabbix template and scripts for my OpenVPN server. The template generates VPN users (as discovered nodes) from the certificate list. Every node is monitored for traffic, up/down state, and uptime. All users are placed in a VPNUsers group.
I'm trying to make a trigger that raises an alert when users are added or removed. I studied the documentation and found the groupsum function, but I can't figure out how to use it to compare the current sum with the previous one.
Is it possible?
The discovery part was not clear to me, but grpsum is an aggregate item function. The values would be stored in an item, and then on top of that item you'd create a trigger using, for example, the abschange()<>0 trigger function.
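A sketch of what that pair looks like (the inner item key vpn.user.online and the host name Aggregates are placeholders for whatever per-node item your template actually collects and wherever you put the aggregate item):

    # aggregate item key (item type "Zabbix aggregate") summing a per-node 0/1 item:
    grpsum["VPNUsers","vpn.user.online","last"]
    # trigger expression on the host holding that item, firing whenever the sum changes:
    {Aggregates:grpsum["VPNUsers","vpn.user.online","last"].abschange()}<>0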
Does anyone know the best practice for overwriting records in Google Cloud DNS using the API? https://cloud.google.com/dns/api/v1/changes/create does not help!
I could delete and re-create, but that is not nice ;) and could cause an outage.
Regards
The Cloud DNS API uses Changes objects to perform the update actions; you can create Changes but you don't ever delete them. In the Cloud DNS API, you never operate directly on the resource record sets. Instead, you create a Changes object with your desired additions and deletions and if that is created successfully, it applies those updates to the specified resource record sets in your managed DNS zone.
It's an unusual mental model, sort of like editing a file by specifying a diff to be applied, or appending to the commit history of a Git repository to change the contents of a file. Still, you can certainly achieve what you want to do using this API, and it is applied atomically at the authoritative servers (although the DNS system as a whole does not really do anything atomically, due to caching, so if you know you will be making changes, reduce your TTLs before you make the changes). The atomicity here is more about the updates themselves: if you have multiple applications making changes to your managed zones, and there are conflicts in changes to the particular record sets, the create operation will fail, and you will have to retry the change with modified deletions (rather than having changes be silently overwritten).
Anyhow, what you want to do is to create a Changes object with deletions that specifies the current resource record set, and additions that specifies your desired replacement. This can be rather verbose, especially if you have a domain name with a lot of records of the same type. For example, if you have four A records for mydomain.example (1.1.1.1, 2.2.2.2, 3.3.3.3, and 4.4.4.4) and want to change the 3.3.3.3 address to 5.5.5.5, you need to list all four original A records in deletions and then the new four (1.1.1.1, 2.2.2.2, 4.4.4.4, and 5.5.5.5) in additions.
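A rough sketch of that exact change using the google-api-python-client library (project and zone ids are placeholders; note the trailing dot on the record name):

    # pip install google-api-python-client google-auth
    from googleapiclient import discovery

    service = discovery.build('dns', 'v1')  # uses application default credentials

    rrset = {'name': 'mydomain.example.', 'type': 'A', 'ttl': 300}
    change = {
        # deletions must match the currently existing record set exactly
        'deletions': [dict(rrset, rrdatas=['1.1.1.1', '2.2.2.2', '3.3.3.3', '4.4.4.4'])],
        # additions is the complete desired replacement set
        'additions': [dict(rrset, rrdatas=['1.1.1.1', '2.2.2.2', '4.4.4.4', '5.5.5.5'])],
    }

    result = service.changes().create(project='my-project',
                                      managedZone='my-zone',
                                      body=change).execute()
    print(result['status'])  # 'pending' until the change has been applied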
The Cloud DNS documentation provides example code boilerplate that you can adapt to do what you want: https://cloud.google.com/dns/api/v1/changes/create#examples; you just need to set the deletions and additions for the Changes object you are creating.
I have never used the API for this purpose, but if you use the command line, i.e. gcloud, to update DNS records, it binds the change into a single transaction, so the two tasks of deleting the old record and adding the updated record are executed as one unit. Since transactions are atomic in nature, it shouldn't cause any outage.
Personally, I have never witnessed an outage while using gcloud to update DNS settings for my domain.
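For example, reusing the four-A-records scenario from the other answer (zone name and record values are placeholders):

    gcloud dns record-sets transaction start --zone=my-zone
    gcloud dns record-sets transaction remove --zone=my-zone \
        --name=mydomain.example. --type=A --ttl=300 \
        "1.1.1.1" "2.2.2.2" "3.3.3.3" "4.4.4.4"
    gcloud dns record-sets transaction add --zone=my-zone \
        --name=mydomain.example. --type=A --ttl=300 \
        "1.1.1.1" "2.2.2.2" "4.4.4.4" "5.5.5.5"
    gcloud dns record-sets transaction execute --zone=my-zone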
Basically, I've got a Service which can work with two alternative resource sets. Let's say the Service would optimally work with one Doctor and one Nurse, but it is also possible to work with only one Doctor if a Nurse isn't available.
Now, assuming the Doctor works more slowly without a Nurse, the Service's delay time must depend on the resource set being employed at the moment (Doctor+Nurse or Doctor only). Any idea how I can program this?
You should also keep in mind that my model has various Services working in parallel in the same way; it's not just a single Service line.
Thanks!
You're using Services, but to me the combination of Seize, Delay, and Release gives you more flexibility.
What I've done is set the resource choice according to the image below:
It is important to list the nurses before the doctors in the first set (for some reason AnyLogic would otherwise opt to use only the doctor, even with a nurse available).
Then, I would write this code:
This means that if the agent was only able to seize one resource, the process will take longer (15 is just an arbitrary value).
In the Delay block, I would set the processing time to agent.processTime.
The topology I'm using is this:
Obviously this is a workaround and will not work for every case; you can always change the conditions you check. I couldn't find a way to check which resource set was picked by the Seize operation. If you're in a hurry, this will do the trick.
Hope that helps,
Luís
My company has a couple of joblets that we put into new jobs to do things like initialize variables, get system information from the database, and send out error/warning emails. The issue we are running into is that if we start creating the components of a job and then realize we forgot to include these three joblets, we basically have to re-create the job to ensure that the joblets are added first so they run first.
Is there any way to force these joblets to run first and possibly also in a certain order before moving on to the contents of the job being created? Please let me know if there is any information you may need that I'm missing as I have only been using Talend for a few days. The rest of the team has not been using it too much longer than I have, so they do not have the answer I'm looking for either. Thanks in advance!
In joblets you can use the Trigger_Input and Trigger_Output components as connection points for On Subjob Ok triggers. That lets you connect joblets and other components in a job with triggers, thus enforcing execution order.
But you cannot get an On Subjob Ok trigger from a tPreJob. What I am thinking of is triggering from the tPreJob to a tWarn (On Component Ok) and then from the tWarn to the joblet (On Subjob Ok).
Even after deleting containers and objects directly from the file system, Swift still lists the containers when a GET is executed against the account. However, if we try to delete a container with a DELETE command, a 404: Not Found error is returned. Is something wrong, or is there some kind of cache?
I think the problem came from deleting the containers and/or objects directly from the file system.
Swift's methods for handling write requests for objects and containers have to be very careful to ensure all the distributed index information remains eventually consistent. Direct modification of the file system is not sufficient. It sounds like the container databases got removed before they had a chance to update the account database's listings - perhaps they were manually unlinked before all of the object index information was removed?
Normally, after a delete request the containers have to hang around for a while as "tombstones" to ensure the account database gets updated correctly.
As a workaround, you could recreate them (with a POST) and then re-issue the DELETE, which should successfully delete the new, empty containers and update the account database listing directly.
(Note: the container databases themselves, although empty, will still exist on disk as tombstones until the reclaim_age passes.)
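A sketch of that recreate-then-delete sequence with python-swiftclient (the endpoint, credentials, and container name are placeholders; I'm using put_container, which (re)creates a container through the API, in place of the POST mentioned above):

    # pip install python-swiftclient
    from swiftclient.client import Connection

    conn = Connection(authurl='http://swift.example:8080/auth/v1.0',
                      user='account:user', key='secret')

    # Recreate the half-deleted container so the account database
    # gets a consistent row for it again...
    conn.put_container('mycontainer')

    # ...then delete it properly through the API, which also updates
    # the account listing.
    conn.delete_container('mycontainer')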