How to Create One Instance per Bookmark? - persistence

Background:
We currently use WF 4 and the SQL Workflow Instance Store to persist our workflows at each bookmark. The first time a workflow is persisted, a new record is created in the table "System.Activities.DurableInstancing.InstancesTable". On each subsequent persist, existing records are deleted and a new record inserted.
Question:
How could you modify this behavior so that on each subsequent persist, a new record would be created in the instances table?
Notes:
You can create a custom instance store, but it is "non-trivial" to do so. Is there a way you could use the System.Activities.DurableInstancing.SqlWorkflowInstanceStore class, but customize this behavior?

The InstancesTable contains a record per workflow instance so having multiple records there for the same workflow instance would be very confusing at the very least.
It kind of sounds like you are trying to use the InstancesTable for tracking. If that is the case, you should take a look at creating a custom TrackingParticipant instead.
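For reference, a minimal sketch (assuming WF 4's extension model; HistoryTrackingParticipant and SaveHistoryRow are made-up names) of a tracking participant that writes one row of your own history table per tracking record, leaving the instance store's InstancesTable alone:

using System;
using System.Activities.Tracking;

public class HistoryTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // InstanceId, RecordNumber and EventTime come from the tracking record
        SaveHistoryRow(record.InstanceId, record.RecordNumber, record.EventTime, record.ToString());
    }

    private void SaveHistoryRow(Guid instanceId, long recordNumber, DateTime eventTime, string details)
    {
        // e.g. INSERT one row into your own WorkflowHistory table here
    }
}

// Registration sketch: attach it alongside the SQL instance store.
// var app = new WorkflowApplication(myActivity);
// app.InstanceStore = sqlWorkflowInstanceStore;
// app.Extensions.Add(new HistoryTrackingParticipant());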

Related

Cannot repopulate ElectrodeGroup datajoint table

I'm a researcher in Loren Frank's lab at UCSF using datajoint and files in the nwb format. I made some changes to our code for defining entries in our ElectrodeGroup table, and was hoping to test those by deleting an entry in the table and regenerating it with the new code. I was able to delete the entry, but cannot repopulate it. In particular, when I run ElectrodeGroup.populate() or ElectrodeGroup.populate({"nwb_file_name": my_file_name}), no changes are made to the table. I confirmed that the electrode group I deleted and am trying to regenerate is defined in the original nwb file. I am seeking input on why the populate command seems to not be working here. Thanks in advance for any help!
This user also contacted our team through another channel. Sharing the solution below for future users, in reference to this schema. In short, populate only generates work for primary keys that originate in upstream tables.
Since the ElectrodeGroup table's only upstream dependency is Session, the make method will only be called for a session that has no electrode groups at all. From DataJoint's perspective, the only 'guaranteed' knowledge about what should exist in this table comes from the presence or absence of related upstream records. Because the 'new' primary attribute electrode_group_name is defined by the ElectrodeGroup table itself, DataJoint cannot know how many rows a make call will create, so it simply invokes make once per Session and expects that single invocation to fully define every electrode_group_name value the table will use. If the session already has at least one row, no work appears to be needed, so make() is not invoked again.
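As a quick check (a sketch assuming DataJoint's standard populate machinery; the table names follow this thread), you can look at the keys populate() still considers outstanding:

# key_source is the set of upstream keys (here, Sessions); the antijoin removes every
# session that already has any ElectrodeGroup rows, which is why the deleted group
# is never regenerated
todo_keys = ElectrodeGroup().key_source - ElectrodeGroup.proj()
print(todo_keys)  # the session in question will not appear here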
There are a couple of possible solutions:
1. Model the electrode group explicitly, with a table that defines the existence of an electrode group (e.g., ElectrodeGroupConfiguration). ElectrodeGroup would then inherit primary keys from both Session and ElectrodeGroupConfiguration, and its make function would be adjusted to load the unique keys from both upstream tables.
2. Adjust the make function to handle the partial insert/update case, and call the make function directly with the desired primary key when these kinds of 'abnormal' updates need to occur (see the sketch after this list).
Method #1 is 'cleanest' with respect to the DataJoint data model (explicitly modeled data dependencies using make/populate), whereas #2 slightly 'escapes' the DataJoint data model in a controlled way to achieve the desired schema/data result.
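A minimal sketch of option #2, assuming a make() that derives all groups for a session from the NWB file (read_electrode_groups_from_nwb, the field names, and the file name are illustrative placeholders, not the lab's actual code):

def make(self, key):
    # derive every electrode group for this session from the source NWB file
    groups = read_electrode_groups_from_nwb(key["nwb_file_name"])  # hypothetical helper
    rows = [dict(key, electrode_group_name=g.name, description=g.description) for g in groups]
    # skip_duplicates lets the same make() run against a session that already has some
    # of its electrode groups, inserting only the missing ones; depending on your
    # DataJoint version, allow_direct_insert=True may be needed when make() is called
    # outside populate()
    self.insert(rows, skip_duplicates=True, allow_direct_insert=True)

# one-off repair: call make() directly with the Session primary key, since populate()
# will not revisit a session that already has rows in ElectrodeGroup
ElectrodeGroup().make({"nwb_file_name": "my_session.nwb"})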

Generate notification in slack channel whenever there is a change in a table or schema in RDS

I have a PostgreSQL DB instance in RDS, and there are frequent changes happening in it: new tables or schemas get added, a table is renamed, a new column is added to an existing table, or a column of an existing table is renamed. I want to be notified in Slack whenever there is such a change.
I did a lot of research but couldn't find anything. We can create SNS topics, but they only capture DB-instance-level events, like an instance being down, created, or deleted, and nothing at the schema or table level.
I have one approach in mind, but it is not a very clean approach. My idea is to write a trigger on information_schema.columns, which lists the schemas, tables, and their column names and types; whenever any of those records change, the trigger would fire. But I can't figure out how to get the trigger to notify my Slack channel.
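One way to approximate this without any server-side changes (a rough sketch; the DSN, webhook URL, and 60-second interval are placeholders) is a small watcher script that periodically snapshots information_schema.columns and posts to a Slack incoming webhook when the snapshot changes:

import time
import psycopg2   # PostgreSQL driver
import requests   # used for the Slack incoming-webhook call

DSN = "host=my-rds-endpoint.rds.amazonaws.com dbname=mydb user=watcher password=secret"  # placeholder
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def snapshot(conn):
    # one tuple per column in user schemas: (schema, table, column, type)
    with conn.cursor() as cur:
        cur.execute("""
            SELECT table_schema, table_name, column_name, data_type
            FROM information_schema.columns
            WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
        """)
        return set(cur.fetchall())

def notify(text):
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)

conn = psycopg2.connect(DSN)
conn.autocommit = True  # read-only polling, no transactions needed
previous = snapshot(conn)
while True:
    time.sleep(60)  # polling interval
    current = snapshot(conn)
    added, removed = current - previous, previous - current
    if added or removed:
        notify("RDS schema change detected\nadded: {}\nremoved: {}".format(
            sorted(added)[:10], sorted(removed)[:10]))
    previous = current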

Execute Data Factory pipeline based on event in table

I have a requirement to trigger an Azure Data Factory pipeline whenever there is a new record in a table. Is there any way to achieve this?
No, event-based triggers only support Azure Blob Storage for now.
You can vote here to progress this feature in Azure Data Factory.
Currently it is not possible.
There is a similar discussion going on in the thread below:
https://learn.microsoft.com/en-us/answers/questions/197465/how-to-trigger-an-adf-based-on-any-data-changes-wi.html
While this is not currently supported, here's an idea on how to fake it. (Just sharing an idea, not sure whether this is feasible.)
Create a Trigger on INSERT
Trigger executes a Stored Procedure
Stored Procedure uses PolyBase to create a text file in Blob Storage with the relevant information (like the new row ID).
Create a BlobCreated event trigger over that Storage location in ADF or Logic App.
Doing this should end up with an Event Trigger that fires whenever a new row is inserted.
We can use a Logic App to trigger a pipeline based on a query that finds data inserted in the past x minutes/seconds.
There's no direct way to do this in ADF (yet). As others have pointed out, you can vote for the feature to be added and Microsoft might add it. But there's still a way to do this:
You can set up a Logic App. There's a really nice video tutorial on this here: https://www.youtube.com/watch?v=z0sMIN4xMSY
You can set up a Logic App to use a SQL database as the trigger, and then decide whether you want it to fire when something new is created in a specific table or when something is modified in a specific table. You can then have the Logic App trigger the run of an Azure Data Factory pipeline, and set it to check the table as often as you like, for example every minute, every 3 minutes, or longer.
If you want an ADF pipeline to run every time a certain table is modified or a row is inserted, but the table in question is very large, you can reduce compute by creating a change-tracking table. Basically, have another, very small table that logs changes and have the Logic App monitor that table instead (a sketch of such a table follows this answer). This makes it faster and more efficient in such a case.
Give the video a watch.
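For reference, a minimal sketch of that change-log idea (the table, trigger, and column names are made up; dbo.BigTable stands in for the large table the pipeline cares about). The Logic App would then poll dbo.ChangeLog instead of the large table:

-- small table the Logic App monitors
CREATE TABLE dbo.ChangeLog (
    ChangeLogId  INT IDENTITY(1,1) PRIMARY KEY,
    SourceTable  SYSNAME   NOT NULL,
    SourceRowId  INT       NOT NULL,
    ChangedAtUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- log one row per record inserted into the large table
CREATE TRIGGER dbo.trg_BigTable_LogInsert
ON dbo.BigTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ChangeLog (SourceTable, SourceRowId)
    SELECT 'dbo.BigTable', i.Id
    FROM inserted AS i;
END;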

Detect when a record is being cloned in trigger

Is there a way to detect that a record being inserted is the result of a clone operation in a trigger?
As part of a managed package, I'd like to clear out some of the custom fields when Opportunity and OpportunityLineItem records are cloned.
Or is a trigger not the correct place to prevent certain fields being cloned?
I had considered creating dedicated code to invoke sObject.Clone() and excluding the fields that aren't required. This doesn't seem like an ideal solution for a managed package as it would also exclude any other custom fields on Opportunity.
In the Winter '16 release, Apex has two new methods that let you detect if a record is being cloned and from what source record id. You can use this in your triggers.
isClone() - Returns true if an entity is cloned from something, even if the entity hasn’t been saved.
getCloneSourceId() - Returns the ID of the entity from which an object was cloned.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_methods_system_sobject.htm#apex_System_SObject_getCloneSourceId
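A short sketch of how these might be used (the trigger name and Custom_Field__c are placeholders, and it assumes the methods behave as documented in a before-insert context):

trigger ClearClonedFields on Opportunity (before insert) {
    for (Opportunity opp : Trigger.new) {
        // isClone() is true when this record was created by cloning another record
        if (opp.isClone()) {
            System.debug('Cloned from ' + opp.getCloneSourceId());
            opp.Custom_Field__c = null; // blank out fields that should not carry over
        }
    }
}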
One approach, albeit kind of kludgy, would be to create a new field, say original_id__c, which gets populated by a workflow (or trigger, depending on your preference for the order of execution) with the Salesforce Id of the record when blank. For new records this field will match the standard Salesforce Id; for cloned records it won't. There are a number of variations on when, how, and what to populate the field with, but the key is to give yourself your own hook to differentiate new and cloned records.
If you're only looking to control the experience for the end user (as opposed to a developer extending your managed package) you can override the standard clone button with a custom page that clears the values for a subset of fields using URL hacking. There are some caveats, namely that the field must be editable and visible on the page layout for the user who clicked the clone button. As of this writing I don't believe you can package standard button overrides, but the list of what's possible changes with every release.
You cannot detect a clone operation inside the trigger; it is treated as an "Insert" operation.
You can still use dedicated code to invoke sObject.Clone() and exclude the fields that aren't required. You can ensure that you include all fields by using the sObject describe information to get hold of all fields for that object, and then exclude the fields that are not required.
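A rough sketch of that approach (the Id, object, and field names are illustrative; adjust the exclusion set for your package):

// build the query from describe information so no field is missed
Id sourceId = '006000000000001AAA'; // placeholder Opportunity Id
Map<String, Schema.SObjectField> fieldMap = Schema.SObjectType.Opportunity.fields.getMap();
String soql = 'SELECT ' + String.join(new List<String>(fieldMap.keySet()), ',')
    + ' FROM Opportunity WHERE Id = :sourceId';
Opportunity source = Database.query(soql);

// clone(preserveId, isDeepClone, preserveReadonlyTimestamps, preserveAutonumber)
Opportunity copy = source.clone(false, true, false, false);
for (String fieldName : new Set<String>{'my_custom_field__c', 'another_field__c'}) {
    copy.put(fieldName, null); // drop values that should not be carried over
}
insert copy;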
Hope this makes sense!
Anup

Salesforce.com: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record

In our production org, we have a system of uploading sales data into Salesforce using the command-line Data Loader. This data is loaded into a temporary object Temp. We have created a formula field (which combines three fields) to form a unique key. The purpose of the object is to reduce the user effort of creating the key manually.
There is an after insert trigger on Temp which calls an asynchronous method that upserts the data to another object, SalesData, using the key. The insert/update trigger on SalesData checks various fields and creates/updates records in another object, SalesRecords. After the insert/update is complete, all the records in the temp object Temp are deleted. The SalesRecords object does not have any trigger on it and is a child of another object, Sales. The Sales object has some roll-up fields summing up fields from the SalesRecords object.
Lately, we are getting the error below for some of the records being updated.
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record
Please provide some pointers to resolve this issue.
This could be caused either by conflicting DML operations across the various trigger executions or by recursive trigger execution. I would assume that the async executions cause multiple subsequent updates on the same records, probably on the SalesRecords object. I would recommend trying to simplify the process to avoid too many related trigger executions.
I'm a little surprised you were able to get this to work in the first place. After triggers should be used with caution and only when before triggers can't do the job. One reason is that in before triggers you don't need additional DML to change records; you simply change the values and the insert/update commit happens automatically. But recursive trigger firing is the main problem with after triggers.
One quick way to avoid trigger re-entry is to use a public static Boolean in a class that states whether you're already in this trigger from the same thread of execution.
Something like:
public class TriggerControl {
    public static Boolean isExecuting = false;
}
Once set to true, any trigger code that would be a re-fire can be skipped with:
if (!TriggerControl.isExecuting)
{
    TriggerControl.isExecuting = true;
    // Perform trigger logic
    // ...
}
Additionally, since the order of trigger execution cannot be determined up front, you might be seeing an issue with deletions or other data changes that depend on other parts of your flow to finish first.
Also, without knowing the details of your custom unique 3-part key, I'd wonder if there's a problem there too, such as whether it's truly unique or not. Case insensitivity is a common mistake, and it's the reason there are 15 AND 18 character Ids in Salesforce. For example, when people export to Excel (a case-insensitive environment) and do VLOOKUPs, they would occasionally find the wrong record. The three-character calculated suffix was added to disambiguate in case-insensitive environments.
Googling for this same error led me to this post:
http://boards.developerforce.com/t5/General-Development/Unable-to-obtain-exclusive-access-to-this-record/td-p/345319
It points out some common causes for this error:
Sharing Rules are being calculated.
A picklist value has been replaced and replacement is in progress.
A custom index creation/removal is in progress.
The least likely one: someone else is already editing the same record that you are trying to access at the same time.
Posting here in case somebody else needs it.
I got this error multiple times today. It turned out one of our vendors was updating their installed package in the same org at the time. All kinds of things were going wrong as well: some object validation exceptions were being thrown on DML, without any error message content.
Resolution
The error is shown when a field update, such as a roll-up summary field, is being attempted on a parent object that already had a field update causing the roll-up summary field to recalculate. This could also occur if a trigger or another Apex job is running on the master object and is also attempting an update.
You can either reduce the batch size and try again, or create separate smaller files to import if this issue occurs.