The correct HTTP method for resetting data (REST)

I want to "reset" certain data in my database tables using a RESTful method but I'm not sure which one I should be using. UPDATE because I'm deleting records and updating the record where the ID is referenced (but never removing the record where ID lives itself), or DELETE because my main action is to delete associated records and updating is tracking those changes?
I suppose this action can be run multiple times and the result will be the same; however, the ID will always be found when "resetting".

I think you want the DELETE method.
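For example (a minimal sketch in TypeScript/Express; the route and the historyRepo/widgetRepo helpers are made up for illustration, not taken from the question), the reset could be modelled as deleting a sub-resource, so the verb matches the dominant effect and repeating the call leaves the same end state:

```typescript
import express from 'express';

const app = express();

// Placeholder data-access helpers standing in for whatever ORM/queries are actually used.
const historyRepo = { deleteAllForWidget: async (_id: string): Promise<void> => {} };
const widgetRepo = { clearHistoryPointer: async (_id: string): Promise<void> => {} };

// DELETE /widgets/:id/history -> remove the associated records and clear the
// reference on the parent row; the parent record itself is never deleted.
app.delete('/widgets/:id/history', async (req, res) => {
  await historyRepo.deleteAllForWidget(req.params.id);
  await widgetRepo.clearHistoryPointer(req.params.id);
  // 204 either way: running the reset again yields the same end state (idempotent).
  res.status(204).end();
});

app.listen(3000);
```

The parent record is never removed, only the associated records and the reference to them, which fits the "the ID will always be found when resetting" requirement.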

Related

Http "Put" method to update a record and insert new one simultaneously

I have a use case where I need to update an existing record in the DB, stamp it as expired, and create a new record with a new expiry date.
For updating the record, I intend to use a "PUT" call.
However, for creating the new record, do I need to call a "POST" endpoint again from the UI?
Or can I simply add another repository.save(obj) call in the "PUT" method implementation?
Thanks.
Edit 1: The new record is a copy of the expired record, but with a new expiry date.
The main difference between PUT and POST is that one is idempotent and the other isn't, meaning you can repeat the same PUT many times and it won't keep adding more and more entities/elements.
Usually creating a new resource is a POST operation, because in most situations you send an entity without an ID and the server assigns one. So if you repeat the same operation multiple times, you get more entities. That is not idempotent and requires POST.
So in a perfect world you'd send two separate requests: PUT for the expiration, POST for the new entity. In the real world you may encounter additional constraints:
Both operations may need to run in a single transaction. Otherwise you may remove the old entity without creating a new one.
Separate requests may lead to performance issues or complications on the client side (especially if it's an async environment like JS).
So you may have to create an ugly API that accepts both entities in one request. But such a request should be POST, because it's not idempotent (a sketch of that option follows below).
Though it sounds like the new entity in your situation is just an update to the old one. This is usually solved by PUTting the same entity with the new fields, without sending an explicit removal; in that case the server has to recognize that the old entity needs to be marked as expired. And you wouldn't assign a new ID to the updated entity, you'd assign a new version (an extra column in the DB).
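For what it's worth, here is a rough sketch of the "both entities in one request" option (TypeScript/Express purely for illustration, even though the question is about a Spring service; expireAndCopy is a hypothetical stand-in for the repository.save calls wrapped in a transaction):

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical store: expires record `id` and inserts a copy with the new expiry
// date in a single DB transaction (stands in for the repository.save calls).
const store = {
  expireAndCopy: async (id: string, newExpiry: string) => ({
    expiredId: id,
    newId: 'server-generated-id',
    expiryDate: newExpiry,
  }),
};

// POST, not PUT: repeating this request keeps producing new records, so the
// operation is not idempotent.
app.post('/records/:id/renewals', async (req, res) => {
  const created = await store.expireAndCopy(req.params.id, req.body.newExpiry);
  res.status(201).json(created); // return the newly created resource
});

app.listen(3000);
```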

Tracking who did what; Mongo atomicity when deleting by filter

I've been implementing an auditing system for Mongo that tracks call and user information for each Mongo transaction.
E.g. user bill
made a call to x endpoint
at y time
and changed z field from foo to bar.
Inserts and updates are easy, because I tie a stored call-info object to any objects updated in that call (through a set property, or by updating the property directly on a replace or upsert call).
All of that works great.
Deletes are a hairy beast though.
When I delete by ID I can easily track that information. BUT when I delete by filter,
e.g. delete from users where username like bill,
Mongo doesn't return the deleted IDs in the response. If I query to get those objects before I delete them, who knows what could happen between the time I get those objects and when I actually delete them.
(Knock knock, race condition. Who's there?)
Any ideas on how to keep the atomicity of that delete and still have a reliable way to tie that delete call to the delete transaction?
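One commonly suggested workaround is to loop findOneAndDelete(filter) instead of issuing a single delete-by-filter: each call atomically removes one matching document and returns it, so the audit entry can be written from the exact document that was removed. A sketch (Node.js driver v4/v5-style API; the collection and field names are made up):

```typescript
import { MongoClient } from 'mongodb';

async function auditedDeleteByFilter(uri: string, callInfo: Record<string, unknown>) {
  const client = await MongoClient.connect(uri);
  try {
    const db = client.db('app');
    const users = db.collection('users');
    const auditLog = db.collection('audit_log');

    // Keep going until nothing matches the filter any more. Each iteration is an
    // atomic "remove one matching document and hand it back" operation, so there
    // is no separate query-then-delete window for a race.
    while (true) {
      const result = await users.findOneAndDelete({ username: /bill/i });
      if (!result.value) break; // driver v4/v5 resolves to { value: deletedDoc | null }
      await auditLog.insertOne({
        ...callInfo,
        action: 'delete',
        deletedDocument: result.value,
        at: new Date(),
      });
    }
  } finally {
    await client.close();
  }
}
```

This turns one bulk delete into N single-document deletes, which is slower but keeps the audit record exactly in step with what was actually removed.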

Update associated field in Waterline/Sails without affecting other fields

I notice that multiple requests to a record can cause writes to be overwritten. I am using Mongo, btw.
I have a schema like:
Trip { id, status, tagged_friends }
where tagged_friends is an association to the Users collection.
When I make two calls to update trips in close succession (in this case I am making two API calls from the client, actually automated tests), it's possible for them to interfere, since they both call trip.save():
Update 1: update the tagged_friends association
Update 2: update the status field
So I am thinking these two updates should only save the "dirty" fields. I think I can do that with Trips.update() rather than trip.save()? But the problem is that I can't use update() to update an association; that does not appear to work.
Or perhaps there's a better way to do this?
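If a newer Sails/Waterline (1.x-style) API is available, one option is to make each request write only the attribute it actually cares about, roughly like this (Trip and tagged_friends come from the question; the helper functions and values are illustrative):

```typescript
// In Sails, models are usually exposed as globals; typed loosely here for the sketch.
declare const Trip: any;

async function updateStatusOnly(tripId: string, status: string) {
  // Criteria-based update: only the `status` attribute is written, so it can't
  // clobber a concurrent change to tagged_friends.
  await Trip.update({ id: tripId }).set({ status });
}

async function tagFriendsOnly(tripId: string, friendIds: string[]) {
  // Collection helper: modifies the association records directly, without
  // loading the Trip and calling trip.save() on the whole object.
  await Trip.addToCollection(tripId, 'tagged_friends').members(friendIds);
}
```

On older Waterline versions without addToCollection, the association still has to go through the fetched record, which is where the overwrite risk comes from in the first place.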

Obtain ServiceDeploymentId in TrackingParticipant

In WF4, I've created a descendant of TrackingParticipant. In the Track method, record.InstanceId gives me the GUID of the workflow instance.
I'm using the SqlWorkflowInstanceStore for persistence. By default records are automatically deleted from the InstancesTable when the workflow completes. I want to keep it that way to keep the transaction database small.
This creates a problem for reporting, though. My TrackingParticipant will log the instance ID to a reporting table (along with other tracking information), but I'll want to join to the ServiceDeploymentsTable. If the workflow is complete, that GUID won't be in the InstancesTable, so I won't be able to look up the ServiceDeploymentId.
How can I obtain the ServiceDeploymentId in the TrackingParticipant? Alternately, how can I obtain it in the workflow to add it to a CustomTrackingRecord?
You can't get the ServiceDeploymentId in the TrackingParticipant. Basically the ServiceDeploymentId is an internal detail of the SqlWorkflowInstanceStore.
I would either set the SqlWorkflowInstanceStore not to delete the workflow instance upon completion and delete it myself at some later point in time, after saving the ServiceDeploymentId along with the InstanceId.
An alternative is to use auto cleanup with the SqlWorkflowInstanceStore and retrieve the ServiceDeploymentId when the first tracking record is generated. At that point the workflow is not complete, so the original instance record is still there.

Can Primary-Keys be re-used once deleted?

0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records, but my client wanted the ability to set the keys of our custom entity to predefined values. Potentially this enables us to know what data was created by our installer and what data was created post-install.
I tested to ensure that the Guids can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying my validation code for the import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface actually delete the record, or does it still exist, just hidden from the user's eyes? Is there a way to ensure that a deleted record is permanently deleted and its Guid becomes free? In a live environment these Guids would never have existed, but during development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting Guids is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that Guid unless you (or the deletion service) delete the entity from the database.
For example, in the LeadBase table you will find a field called DeletionStateCode: a value of 0 means the record has not been deleted, and a value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
I think Zahir is right; run the deletion service and then try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.