Trigger update does not work when creating a new request - triggers

I am new to Oracle. When I create a new request in Clarity (a project & portfolio management application), or when I change the status of a request, I would like to update the status field to the new value of mb_status_idea.
The following trigger code works well for updates, but if I create a new request it does not update the status (so status does not match status MB).
IF (:old.mb_status_idea != :new.mb_status_idea)
THEN
  UPDATE inv_investments a
  SET a.status = stat
  WHERE a.id = :new.id;
END IF;
I think the problem is that when creating a new request, OLD contains no value for an insert trigger, so the comparison never evaluates to true and the status is not updated.
Note: the status field is in the table INV_INVESTMENTS, stat is assigned as stat := :new.mb_status_idea, and the database column for status MB is mb_status_idea.
I also added this condition --> or (:old.mb_status_idea is null), but again, when I create a new request the values of "Status" and "status MB" are different (status is not updated).
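For what it's worth, the shape of condition I would expect to need looks like the sketch below (INSERTING and UPDATING are Oracle's trigger predicates; this assumes the trigger is declared FOR INSERT OR UPDATE):

IF INSERTING
   OR (UPDATING AND (:old.mb_status_idea IS NULL
                     OR :old.mb_status_idea != :new.mb_status_idea))
THEN
  -- stat holds :new.mb_status_idea, as noted above
  UPDATE inv_investments a
  SET a.status = stat
  WHERE a.id = :new.id;
END IF;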
I would appreciate it if someone could help me overcome this problem.
All ideas are highly appreciated,
Mona

With Clarity it is recommended not to use triggers, for a few reasons: jobs and processes may change the values of some fields at times other than when edits happen through the application, and you can't control these; triggers can't be used if you use CA hosting services; and triggers have to be removed for upgrades, because the upgrade process breaks them.
For this type of action I would recommend using the process engine. You can set up a process to run any time the field is updated. The update can be performed by a custom script or a system action. The system action is fairly straightforward to configure. If you use a custom script, there are examples in the admin bookshelf documentation: you write a SQL update statement and put it in a GEL script.
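If you go the custom-script route, the GEL script essentially wraps a plain SQL update. A rough sketch of that statement, reusing the column names from your trigger (the two bind placeholders are illustrative; the process would supply the actual values):

UPDATE inv_investments
SET status = :new_status      -- the current value of mb_status_idea
WHERE id = :request_id;       -- internal id of the request that changed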

Related

Why Spring Data doesn't support returning entity for modifying queries?

When implementing a system which creates tasks that need to be resolved by some workers, my idea would be to create a table holding the task definition along with a status. E.g. for document review we'd have something like reviewId, documentId, reviewerId, reviewTime.
When documents are uploaded to the system we'd just store the documentId along with a generated reviewId and leave reviewerId and reviewTime empty. When the next reviewer comes along and starts the review, we'd set his id and the current time to mark the job as "in progress" (I deliberately skip the case where the reviewer takes a long time, or dies during the review).
When implementing such a use case in e.g. PostgreSQL we could use something like:

UPDATE review
SET reviewerId = :reviewerId, reviewTime = :reviewTime
WHERE reviewId = (SELECT reviewId FROM review
                  WHERE reviewerId IS NULL AND reviewTime IS NULL
                  LIMIT 1
                  FOR UPDATE SKIP LOCKED)
RETURNING reviewId, documentId, reviewerId, reviewTime;

so basically update the first non-taken row, using SKIP LOCKED to skip any rows that are already being processed.
But when moving from the native solution to JDBC and beyond, I'm having trouble implementing this:
Spring Data JPA and Spring Data JDBC don't allow a @Modifying query to return anything other than void/boolean/int, forcing us to perform two queries in a single transaction - one for the first pending row, and a second one for the update (sketched after this list)
one alternative would be to use a stored procedure, but I really hate the idea of storing such logic so far away from the code
another alternative would be to use a persistent queue and skip the database altogether, but this introduces additional infrastructure components that need to be maintained and learned. Any suggestions are welcome though.
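For reference, the two-statement fallback from the first bullet would look roughly like this (both statements inside one transaction, against the same review table as above):

-- statement 1: pick the first pending row, locking it so other workers skip it
SELECT reviewId, documentId
FROM review
WHERE reviewerId IS NULL AND reviewTime IS NULL
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- statement 2: claim it, using the reviewId fetched by statement 1
UPDATE review
SET reviewerId = :reviewerId, reviewTime = :reviewTime
WHERE reviewId = :reviewId;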
Am I missing something? Is it possible to have it all or do we have to settle for multiple queries or stored procedures?
Why Spring Data doesn't support returning entity for modifying queries?
Because it seems like a rather special thing to do and Spring Data JDBC tries to focus on the essential stuff.
Is it possible to have it all or do we have to settle for multiple queries or stored procedures?
It is certainly possible to do this.
You can implement a custom method using an injected JdbcTemplate.

How to create a Logic App Custom Connector polling trigger?

I've been able to create a Logic App Custom Connector with a webhook trigger by following the docs; however, I can't find any documentation on creating a polling trigger. I was only able to find Jeff Hollan's trigger examples, but the polling trigger doesn't seem compatible with the custom connector.
I tried setting up a polling trigger by performing the following steps:
Create an Azure Function with a GET operation expecting a date time query parameter
Have the function return a set of entities that have changed since the last poll
Configure the custom connector to call the Azure Function with the date time query parameter
Configure the response body of the custom connector
Try different things in the 'Trigger configuration' section (this part is the most confusing to me)
Whatever I tried, the trigger always fails with a 404 in the trigger outputs, similar to what I initially had with the webhook trigger type.
There are a few things that confuse me:
1. Path of trigger query seems screwed up
It looks like the custom connector UI screws up the path to the trigger. I noticed this when I downloaded the OpenAPI file. The path to my trigger API should be /api/trigger/tasks/completed, but in the OpenAPI file it read /trigger/api/trigger/tasks/completed. It appears the custom connector adds /trigger in front of the path. I sometimes noticed it doing this multiple times, giving me something similar to /trigger/trigger/trigger/api/trigger/tasks/completed. I fixed this in the OpenAPI file and re-imported it into the custom connector.
2. Trigger Configuration section
I don't understand what to do in the Trigger Configuration section of a polling trigger.
I assume the query parameter to monitor state change is some parameter I define myself, e.g. a timestamp, to determine what entities to return.
For the 'select value to pass to selected query param', I would expect to be able to pick a timestamp from the trigger response. However, it looks like I can only pick values from a collection, not scalar values from the response. How does that work?
Is 'trigger hint' just some information or does it actually control something?

Update associated field in Waterline/Sails without affecting other fields

I notice that multiple requests to a record can cause writes to be overwritten. I am using Mongo, by the way.
I have a schema like:
Trip { id, status, tagged_friends }
where tagged_friends is an association to Users collection
When I make two calls to update trips in close succession (in this case I am making two API calls from the client - actually automated tests), it's possible for them to interfere, since they both call trip.save().
Update 1: update the tagged_friends association
Update 2: update the status field
So I am thinking these two updates should only save the "dirty" fields. I think I can do that with Trips.update() rather than trip.save()? But the problem is that I cannot use update() to update an association? That does not appear to work.
Or perhaps there's a better way to do this?

Multiple / Rapid ajax requests and concurrency issues with Entity Framework

I have an ASP.NET MVC4 application in which I am using Unity as my IoC container. The constructor for my controller takes a Repository, and that repository takes a UnitOfWork (DbContext). Everything seems to work fine until multiple ajax requests from the same session happen too fast: I get the "Store update, insert, or delete statement affected an unexpected number of rows (0)" error due to a concurrency issue. This is what the method called from the ajax request looks like:
public void CaptureData(string apiKey, Guid sessionKey, FormElement formElement)
{
    var trackingData = _trackingService.FindById(sessionKey);
    if (trackingData != null)
    {
        // Find the tracked element matching the submitted one by name
        var formItem = trackingData.FormElements
            .Where(f => f.Name == formElement.Name)
            .FirstOrDefault();
        if (formItem != null)
        {
            formItem.Value = formElement.Value;
            _formElementRepository.Update(formItem);
        }
    }
}
This only happens when the ajax requests come in rapidly; when the requests happen at a normal speed, everything seems fine. It is like the app needs time to catch up. I am not sure how to handle the concurrency check in my repository so I don't miss an update. I have also tried setting "MultipleActiveResultSets" to true, and that didn't help.
As you mentioned in the comment, you are using a row version column. The point of this column is to prevent concurrent overwrites of the same row. You have two operations:
Read record - reads the record and its current row version
Update record - updates the record with the specified key and row version; the row version is updated automatically
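Under the covers, the optimistic update amounts to something like this T-SQL sketch (table and column names are illustrative, not from your code):

UPDATE FormElements
SET Value = @newValue
WHERE Id = @id
  AND RowVersion = @originalRowVersion;
-- 0 rows affected means someone else updated the row first; Entity Framework
-- then raises the "affected an unexpected number of rows (0)" error you see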
Now if those operations are executed by concurrent requests, you may see this:
Request A: Read record
Request B: Read record
Request A: Write record - changes row version!
Request B: Write record - throws an exception, because the record with the row version retrieved during Read record no longer exists
The exception is thrown to tell you that you are trying to update obsolete data, because there is already a new version of the updated record. Normally you need to refresh the data (by reloading the current record from the database) and try to save it again. In a highly concurrent scenario this handling may repeat many times, simply because your database is designed to prevent exactly this. Your options are:
Remove the row version and let requests overwrite the value as they wish. If you really need concurrent request processing and you are happy to end up with "some" value, this may be the way to go.
Do not allow concurrent requests. If you need to process all updates, you most probably also need their real order, in which case your application should not allow concurrent requests.
Use SQL / a stored procedure instead. By using table hints you will be able to lock the record during the read operation, so no other request can read it before the first one saves its changes and commits or rolls back its transaction.
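A minimal sketch of that last option in T-SQL (again with illustrative names; UPDLOCK is the table hint doing the work):

DECLARE @id int = 1;                            -- hypothetical row key
DECLARE @newValue nvarchar(max) = N'new value'; -- hypothetical new value
DECLARE @currentValue nvarchar(max);

BEGIN TRANSACTION;

-- UPDLOCK keeps an update lock on the selected row until the transaction
-- ends, so a concurrent request running the same read blocks here instead
-- of reading a row it would later fail to update
SELECT @currentValue = Value
FROM FormElements WITH (UPDLOCK, ROWLOCK)
WHERE Id = @id;

UPDATE FormElements
SET Value = @newValue
WHERE Id = @id;

COMMIT TRANSACTION;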

Can Primary-Keys be re-used once deleted?

0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records, but my client wanted the ability to set the key of our custom entity to predefined values. Potentially this enables us to know what data was created by our installer and what data was created post-install.
I tested to ensure that the Guids can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying the validation code of the import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface delete the record, or does it still exist, hidden from the user's eyes? Is there a way to ensure that a deleted record is permanently deleted and its Guid becomes free? In a live environment these Guids would never have existed, but during my development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting Guids is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that Guid unless you (or the deletion service) deleted the entity out of the database.
For example, in the LeadBase table you will find a field called DeletionStateCode; a value of 0 implies the record has not been deleted.
A value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
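For example, a quick way to see whether soft-deleted records are still physically present (sketched against the LeadBase table mentioned above):

-- rows marked for deletion but not yet purged by the deletion service
SELECT COUNT(*) AS PendingDeletes
FROM LeadBase
WHERE DeletionStateCode = 2;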
I think Zahir is right - try running the deletion service and then try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.