With the Clio API, when a request is made with created_since and updated_since, Clio does not return any information about deleted records.
I have all the tasks saved in my DB from Clio.
Now I go to Clio and delete 2 Tasks
So, when I request an update on the tasks using created_since and updated_since, I do not get the tasks that were deleted.
How do I get the details of deleted records from the Clio API?
I have the same need.
I am keeping a remote copy of the Clio database in constant sync in order to produce custom reports for clients.
When Contacts/Matters/Activities/Bills/etc. are deleted, I would like to see that through an API call, but at this point I have to scan through my entire database, compare it with everything I received, and manually figure out what to delete.
It is a time-consuming process fraught with difficulties.
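For what it's worth, a minimal sketch of that full-scan comparison in Python, assuming a paginated fetch of ids from the relevant Clio v4 endpoint (the URL, parameters, and pagination shape below are assumptions to adapt); anything present locally but absent from the fetched set can be treated as deleted:

    import sqlite3
    import requests

    CLIO_TOKEN = "yourAccessToken"   # assumption: an OAuth bearer token you already hold
    TASKS_URL = "https://app.clio.com/api/v4/tasks.json"   # assumption: adjust to the resource you sync

    def fetch_all_remote_ids():
        """Page through the endpoint and collect every id Clio still returns."""
        ids = set()
        url, params = TASKS_URL, {"fields": "id", "limit": 200}
        while url:
            resp = requests.get(url, params=params,
                                headers={"Authorization": f"Bearer {CLIO_TOKEN}"})
            resp.raise_for_status()
            body = resp.json()
            ids.update(item["id"] for item in body.get("data", []))
            # assumption about the pagination shape: follow the next-page URL if one is given
            url = body.get("meta", {}).get("paging", {}).get("next")
            params = None
        return ids

    def find_remotely_deleted(db_path="sync.db"):
        """Anything we still hold locally but Clio no longer returns was deleted remotely."""
        remote_ids = fetch_all_remote_ids()
        local_ids = {row[0] for row in sqlite3.connect(db_path).execute("SELECT clio_id FROM tasks")}
        return local_ids - remote_ids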
+1
So there is the List Release Deployments API. It allows getting all the deployments started after a given timestamp. However, how do I get the ones completed after a given timestamp?
Rationale
Suppose I want to write a scheduled job that fetches information about completed deployments and pushes it to an Azure Application Insights bucket (for example, for DORA metrics). I do not see how this can be done easily without the ability to filter by completion date. A relatively hard way would be to fetch by started date, note all the deployments that are inProgress or notDeployed, and record them in a dedicated database. Then, on the next polling cycle, fetch the new deployments plus all those still recorded in that database (a code sketch of this bookkeeping follows EDIT 2 below). This is much more complicated than it could be with the ability to filter by completion date.
Am I missing anything here? Maybe there is a simpler way (possibly using another API) that I just do not see?
EDIT 1
By the way, the hard way is even harder than I thought, since apparently there is no way to fetch release deployments by their Id.
EDIT 2
The plot thickens. If a stage has post-deployment approvals, the stage is reported as inProgress even though de facto it has already been deployed. So an API that just filters by completion date would omit such deployments; it would need an option to include them.
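For reference, a minimal sketch of that "hard way" in Python. fetch_deployments_started_since() is a hypothetical wrapper around the List Release Deployments call, and only inProgress and notDeployed are status names taken from above; the point is the bookkeeping of not-yet-completed deployments between polling cycles:

    import json
    import pathlib

    STATE_FILE = pathlib.Path("pending_deployments.json")   # stands in for the 'dedicated database'

    def fetch_deployments_started_since(since_iso):
        """Hypothetical wrapper around the List Release Deployments call, filtered by started time.
        Returns a list of deployment dicts; the real REST call is out of scope here."""
        return []

    def poll_once(last_poll_iso):
        pending = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        # deployments started since the last poll, plus everything still pending from earlier polls
        candidates = {str(d["id"]): d for d in fetch_deployments_started_since(last_poll_iso)}
        candidates.update(pending)

        completed, still_pending = [], {}
        for dep_id, dep in candidates.items():
            if dep.get("deploymentStatus") in ("inProgress", "notDeployed"):
                # EDIT 2 caveat: a stage waiting on post-deployment approvals still reports
                # inProgress, so a real implementation needs extra handling for that case
                still_pending[dep_id] = dep
            else:
                completed.append(dep)

        STATE_FILE.write_text(json.dumps(still_pending))
        return completed   # push these to Application Insights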
I'm trying to log the completion of each stage of multi-stage YAML pipelines with some custom details.
How can I add custom details to the https://dev.azure.com//_settings/audit logs?
Is there a way to persist this information in SQL DB or any other persistent storage option?
How can I subscribe to these log events?
How can I add custom details to the https://dev.azure.com//_settings/audit logs?
I'm afraid this is not available for you to achieve.
The format of the details text is defined and fixed by our backend class. Once the corresponding action occurs, besides the action class, the event method is also called to generate the log and record it on the audit page. All of this is handled by the backend, and we have not exposed this capability to users so far.
That said, I personally think this is a good idea worth expanding on, because customized details could make the logs more readable for your company. You can raise your idea here, then vote and comment on it. Our Product Group reviews these suggestions regularly and considers adding them to the development roadmap depending on their priority (votes).
How can I subscribe to these log events?
There is one important thing I need to let you know: the audit log is only kept for 90 days. After 90 days it is cleared, including from our backend database. In a nutshell, if you want audit logs older than 90 days, we have no way to restore them.
So I suggest configuring a scheduled pipeline with a PowerShell task.
In that PowerShell task, call this API to fetch the logs and then store them in any file format you want, e.g. .csv, .json, etc.
For the schedule, you can set any interval you like, as long as it is less than 90 days, so that you do not lose any audit event logs.
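The suggestion above uses a PowerShell task; purely as an illustration of the same idea in another language, here is a rough Python sketch that pulls one window of audit entries and writes it to a JSON file. The auditservice URL, query parameters, and response fields are written from memory of the Auditing REST reference and should be double-checked:

    import json
    import requests

    ORG = "yourOrg"                      # assumption: your Azure DevOps organization name
    PAT = "yourPersonalAccessToken"      # assumption: a PAT that can read audit logs
    URL = f"https://auditservice.dev.azure.com/{ORG}/_apis/audit/auditlog"

    def dump_audit_window(start_time, end_time, out_file):
        """Fetch all audit entries between start_time and end_time and store them as JSON."""
        entries, token = [], None
        while True:
            params = {"startTime": start_time, "endTime": end_time,
                      "api-version": "7.1-preview.1"}   # check the current api-version in the docs
            if token:
                params["continuationToken"] = token
            resp = requests.get(URL, params=params, auth=("", PAT))
            resp.raise_for_status()
            body = resp.json()
            entries.extend(body.get("decoratedAuditLogEntries", []))
            if not body.get("hasMore"):
                break
            token = body.get("continuationToken")
        with open(out_file, "w") as f:
            json.dump(entries, f, indent=2)

    # run on a schedule shorter than the 90-day retention window, e.g.
    dump_audit_window("2019-10-01T00:00:00Z", "2019-12-31T23:59:59Z", "audit20191231.json")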
Is there a way to persist this information in SQL DB or any other persistent storage option?
If you can use a different database, I'd suggest considering a document storage solution such as CouchDB, DynamoDB, or MongoDB.
Depending on what you actually use, you can run a Command Line task on a self-hosted agent to execute the corresponding import commands.
For example, I use MongoDB and can run the command below to import the JSON file that the API generated previously:
C:\>mongodb\bin\mongoimport --jsonArray -d mer -c docs --file audit20191231.json
The documentation of this API is a little hard to understand in functional terms.
https://westus.dev.cognitive.microsoft.com/docs/services/Recommendations.V4.0/operations/577d91f77270320f24da2592
Upload a usage event to a model. If buildId is set to "-1", the event is ingested against the Active Build of the model. If buildId is set to null or 0, the events are ingested against the Active build; if an Active build doesn't exist, the events are not associated with any build.
"is ingested against the Active Build of the model"
What does this mean?
What happens when you associate events to a build?
I have been sending events using the Upload usage event API, but I don't see any changes on the active build on the Data Statistics tab.
Any help to understand this would be appreciated.
I'm building a batch process to send new usage events, and right now my approach is this:
Upload New Usage File
Delete Old Usage file
Create New Build
Change Active Build
Delete Old Build
I was hoping that the other API, for just sending usage events, would work, but since I can't make it work as expected, I changed to this approach.
Is this a good approach, or should I be doing this in a different way?
Uploading a usage file is a better approach than uploading usage events.
Reasons:
You get to send the events as one file, thus decreasing your API usage count
You can always review and correct your usage files in case something is wrong. I do not see an API command to view/edit/delete uploaded events
You can reuse your usage files to recreate the model in case of an issue with the current one
Here is my own process, run at midnight (a rough code sketch follows at the end of this answer):
Upload new usage file based on today's events
Create new build
Update my system to use new build number (since I have different build types in the same model)
Why this process?
Apparently, we will need to create a new build anyway for new usage data to be considered.
Per another post (answered by an authority on the subject)
After uploading a usage event you need to create a new build in that model for the usage event to be considered as part of the recommendations request.
You can check the whole post here
Also, as mentioned in the linked post, a few usage events may not be enough to change the recommendations if sent in real time / frequently, thus wasting effort. So a batch process, using usage files and run once per day, is the more pragmatic approach.
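A rough sketch of that nightly batch in Python, just to make the flow concrete. The Recommendations V4.0 paths, parameters, and response fields below are assumptions from memory of the linked API reference, so verify them against the docs before relying on this:

    import time
    import requests

    BASE = "https://westus.api.cognitive.microsoft.com/recommendations/v4.0"   # assumption
    HEADERS = {"Ocp-Apim-Subscription-Key": "yourKey"}
    MODEL_ID = "yourModelId"

    def upload_usage_file(csv_path, display_name):
        # assumption: usage files are POSTed as raw CSV under the model's usage resource
        with open(csv_path, "rb") as f:
            resp = requests.post(f"{BASE}/models/{MODEL_ID}/usage",
                                 params={"usageDisplayName": display_name},
                                 headers={**HEADERS, "Content-Type": "application/octet-stream"},
                                 data=f.read())
        resp.raise_for_status()

    def create_build():
        # assumption about the build request body and the response shape
        resp = requests.post(f"{BASE}/models/{MODEL_ID}/builds",
                             headers={**HEADERS, "Content-Type": "application/json"},
                             json={"description": "nightly build", "buildType": "recommendation"})
        resp.raise_for_status()
        return resp.json()["buildId"]

    def nightly_job(csv_path):
        upload_usage_file(csv_path, "usage-" + time.strftime("%Y%m%d"))
        return create_build()   # store this build number in your own system (step 3 above)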
I've been wrestling with the problem of representing a checkout / checkin system via a REST API.
To give you a small example: our system handles nodes, and we need a way for one user to be able to lock a node, make changes to it, and then commit it.
So I was thinking something like
/nodeId (this is the base location for a node and provides an up-to-date, read-only view of the checked-in revision of the node)
/nodeId/edited (POSTing here creates an edited version of the document, i.e. checkout; GETting gets the edited version, and PUTting makes a change)
Now I want to represent checkin. I'm tempted to say that POSTing to /nodeId/edited again will commit the edited document, but then we are giving POST two distinct meanings. I could create another checkin endpoint, but that seems messy. Another alternative is having a POST to /nodeId create the edited version, but again this seems confused.
To lock/checkout a Resource, POST to /nodeId with the partial document {"locked":"true"}. The server must handle Resource state and check whether the Resource can be locked, etc. The server could answer 204 No Content if the lock succeeded and 409 Conflict if the lock was not possible.
To unlock/checkin the locked Resource, POST to /nodeId with the partial document {"locked":"false", "someKey":"someValue", ...}. The server must handle Resource state, check that the Resource is locked, and update it using the POSTed data. Again, the server could answer 204 No Content if the unlock succeeded and 409 Conflict if not (see the sketch below).
Edit: added possible HTTP status codes.
Edit 2: There is no "endpoint" in REST like in SOAP. You manipulate Resources, you don't call methods.
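A minimal server-side sketch of that partial-document approach, in Python/Flask with a toy in-memory store; the field names and status codes follow the answer above:

    from flask import Flask, request

    app = Flask(__name__)
    nodes = {"42": {"locked": False, "content": "initial revision"}}   # toy in-memory store

    @app.route("/<node_id>", methods=["POST"])
    def update_node(node_id):
        node = nodes.get(node_id)
        if node is None:
            return "", 404
        patch = request.get_json(force=True)
        wants_lock = str(patch.get("locked", "")).lower() == "true"

        if wants_lock:
            if node["locked"]:
                return "", 409          # someone else already checked it out
            node["locked"] = True
            return "", 204              # checkout succeeded
        else:
            if not node["locked"]:
                return "", 409          # nothing is checked out, so there is nothing to check in
            node.update({k: v for k, v in patch.items() if k != "locked"})
            node["locked"] = False
            return "", 204              # checkin succeeded

    if __name__ == "__main__":
        app.run()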
I'm setting up a basic sync service for an iPad application I'm developing. The goal is to have data consistent throughout several instances of the iPad app, as well as having a read-only version of the data on the web, hence rolling a custom solution.
The current flow is this:
Each entity has a 'created', 'modified' and 'UUID' field which are automatically updated by Core Data
On sync, each entity with a created or modified date after the last sync date is serialised into JSON and sent to the server
The server persists any changes to a MySQL database using the client-generated UUIDs as PKs (if there's a conflict, it just uses the most recently modified entity as the 'true' version, nothing fancy there) and sends back any updated entities to the client
The client then merges these changes back into its Core Data DB
This all seems to be working fine. My problem is how to track deleted objects using this method. I'm guessing I can add a 'deleted' flag to each entity and set it whenever a client deletes something; I can then push that change to the server with the rest of the sync data. Once the sync is complete, the client can actually delete those entities. My questions are:
Can I override Core Data's delete methods to automatically set this flag?
Will this require keeping all deleted entities indefinitely on the server? We'll have no way of knowing when every client has synced and actually deleted each entity (I'm not currently tracking client instances)
Is there a better way of doing this?
How about keeping a delta history table with a UUID and a created/updated/deleted field, maybe with a revision number for each update? That gives you a small checklist of changes since your last successful sync.
That way, if you delete an object, you add an entry to the delta history table with the deleted UUID and mark it deleted. The same goes for created and updated objects; you only need to check the delta table to see what items the server needs to delete, update, create, etc. You could even store every revision on the server to support rolling back to a previous version in the future if you feel like it.
I think a revision number is better than relying on the client's clock, which could potentially be changed manually.
You could use NSManagedObjectContext's insertedObjects, updatedObjects, and deletedObjects properties to create the delta objects before every save :)
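Purely as an illustration of the delta-history idea on the server side (shown here in Python with SQLite; the table and column names are made up): each change gets a row with the entity UUID, a change type, and a monotonically increasing revision, and a client that remembers the last revision it saw asks for everything after it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE delta_history (
                        revision INTEGER PRIMARY KEY AUTOINCREMENT,
                        uuid     TEXT NOT NULL,
                        change   TEXT NOT NULL CHECK (change IN ('created', 'updated', 'deleted'))
                    )""")

    def record_change(uuid, change):
        conn.execute("INSERT INTO delta_history (uuid, change) VALUES (?, ?)", (uuid, change))
        conn.commit()

    def changes_since(last_revision):
        """Everything a client missed since its last successful sync."""
        return conn.execute(
            "SELECT revision, uuid, change FROM delta_history WHERE revision > ? ORDER BY revision",
            (last_revision,)).fetchall()

    # a delete shows up as an explicit delta row instead of a silently missing record
    record_change("0B1F2C3D-UUID", "deleted")
    print(changes_since(0))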
My 2 cents
Whether or not you have to keep deleted objects on the server totally depends on your needs. You will need a deleted flag locally to mark items as deleted for the sync, and maybe also on the server, depending on your desire to roll back.
I have taken care of this problem a few ways before. Here is one possibility:
When a client deletes something, just mark it as deleted locally and delete it from the server during the sync (at which point you can purge it from Core Data). When other clients request that data, send back an HTTP 404 because you don't have the object any more. At that point the client can delete the entity locally. If a client requests a list of things and this object has been deleted, it will simply be missing from the list it gets back, so you can detect that and delete it. I do that in a client by creating an array of object IDs when I get a response from the server and deleting any local objects that don't have those IDs.
We have a deleted field on the server, but just to have the ability to roll back in case something is deleted by accident.
Of course you could return deleted objects to the client so they know what to delete, but if you don't want to keep a copy on the server, you would have to make some assumption that all clients will update within a certain time frame. Then you could garbage-collect after that time frame has expired.
I don't really like that solution, though. If your data is too heavy to ask for all the objects on a complete sync, you could use your current merge strategy for creating and updating, and then run a separate call to check for deleted items. That call could simply ask for all IDs that the client should have on the device and delete the ones that don't exist, OR it could send all the IDs on the client and get back a list of IDs to delete.
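That last variant boils down to a set difference on the server; a tiny sketch (table and column names hypothetical):

    def ids_to_delete_on_client(client_ids, server_conn):
        """Given the UUIDs a client currently holds, return the ones the server no longer has."""
        server_ids = {row[0] for row in server_conn.execute("SELECT uuid FROM entities")}
        return sorted(set(client_ids) - server_ids)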
I think you have to provide more details about the nature of the data if you want a more opinionated suggestion.
Regarding your second question: you can design this so that the server doesn't have to keep deleted records around, if you want to. Let each app know whether a given piece of data (based on its UUID) is stored on the server (e.g. add an existsOnServer property or similar). This starts out false when a new item is created in the app, but is set to true once it has been synced to the server for the first time. That way, if the app tries to sync later but the UUID is not found, you can differentiate the two cases: if existsOnServer is false, then the item is newly created and should be synced to the server, but if it is true, then it was already on the server before but has now been deleted, so you can delete it in the app too.
I'd probably argue against this approach, since it seems more error-prone to me (I can imagine a database or connection error incorrectly being interpreted as a deletion), and keeping records around on your server would usually not be a big deal, but it is possible. The "delta approach" suggested by dzeikei could be used at the same time, so that an update to a record that does not exist on the server signifies that it was deleted, while an insert does not.
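Sketched as client-side decision logic in Python (existsOnServer is the hypothetical per-record flag from the paragraph above; the two helpers are placeholders for the app's own persistence calls):

    def reconcile_missing_on_server(record):
        """Called when a sync finds the record's UUID absent from the server's response."""
        if record.get("existsOnServer"):
            delete_locally(record)          # it was on the server before, so it must have been deleted there
        else:
            push_to_server(record)          # never synced yet: it is a new local record to upload
            record["existsOnServer"] = True

    def delete_locally(record):             # placeholder hooks for the app's own persistence layer
        pass

    def push_to_server(record):
        pass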
You may take a look at Cross-Platform Data Synchronization by Dan Grover if you haven't. It's a very well written paper regarding synchronization and iOS.
About your questions:
You can avoid deleting a record in Core Data and set a 'deleted' flag instead: just update the record rather than deleting it. You could write your own 'delete' method that actually updates the flag on the record.
Always keep a last_sync and a last_updated timestamp for each record on the server and on each client. This way you'll always know when someone changed something anywhere and whether that change was synced against the 'truth database' or not.
Keeping track of deleted records is hard; I guess the best way is to keep a history of syncs for each table, but that is a difficult task. The easiest way, using this 'truth database' kind of configuration, is to flag the records, which means yes, you should keep the data on the server as well as on the client.
During synchronization of data between two tables, some records are deleted when the table rows are the same, and when the rows are different they are synchronized correctly. I used this code (linked as an image).