SoftLayer deletion of manual snapshot - REST

Good Morning,
Snapshot creation is successful via cURL REST commands. These take
anywhere between 1 and 5 minutes before they are visible via the GUI or
getSnapshotsForVolume.
I do not see a reference for deleting a snapshot (manual or automated) via the service; deletion of scheduled snapshots appears to be utilization-based, according to the value defined for the schedule.
I know this function should exist, as the option is available via the GUI.
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Storage_Iscsi
Thanks

Yep, it exists; it is the deleteObject method:
http://sldn.softlayer.com/reference/services/softlayer_network_storage_iscsi/deleteobject
When you create a new manual snapshot, it returns a SoftLayer_Network_Storage_Iscsi object; see:
http://sldn.softlayer.com/reference/services/softlayer_network_storage_iscsi/createsnapshot
You just need to make sure to use the ID of that snapshot in the deleteObject method.
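A minimal sketch of that deleteObject call from Python, assuming SoftLayer's usual REST URL pattern (service, object ID, method as path segments) and basic auth with your username and API key; the snapshot ID is the one returned by createSnapshot:

```python
# Hedged sketch: delete a manual snapshot via the SoftLayer REST endpoint.
# The URL pattern follows SoftLayer's REST mapping of service methods;
# user/api_key/snapshot_id are placeholders you supply yourself.
import base64
import json
import urllib.request

API_BASE = "https://api.softlayer.com/rest/v3"

def delete_object_url(snapshot_id: int) -> str:
    # SoftLayer exposes service methods as path segments, so deleteObject
    # on a storage object (a snapshot is one) looks like this:
    return f"{API_BASE}/SoftLayer_Network_Storage/{snapshot_id}/deleteObject.json"

def delete_snapshot(snapshot_id: int, user: str, api_key: str):
    req = urllib.request.Request(delete_object_url(snapshot_id), method="DELETE")
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # deleteObject reports success as a boolean
```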

Related

Ansible: Check (GET) before Applying (POST/PUT) if applying is idempotent

I am writing roles in Ansible which use the ansible.builtin.uri module to do my bidding against the API of the service.
As I don't want to POST/PUT every time I run the playbook, I check whether the item I want to create already exists.
Does this make sense? In the end I introduce an extra step to GET the status and skip the POST/PUT, where the POST/PUT itself would simply set what I want in the end.
For example, I wrote an Ansible role which gives a user a role in Nexus.
Every time I run the role, it first checks whether the user already has the role and, if not, grants it.
If I didn't check first and the user already had the role, it would simply be applied again.
But as I would like to know exactly what's going to happen, I believe it is better to explicitly check before applying.
What is the best practice for my scenario, and are there any reasons against checking first rather than directly applying the changes?
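The check-before-apply pattern described above can be sketched generically in Python; the `ensure_role` function and the in-memory `FakeApi` stand-in are illustrative only, not a real Nexus client:

```python
# Hypothetical sketch of "GET before POST/PUT": read the current state first,
# and only apply the change when the desired item is missing.

def ensure_role(api, user, role):
    """Grant `role` to `user` only if it is not already present.

    Returns True if a change was made (Ansible's "changed" semantics),
    False if the system was already in the desired state.
    """
    current = api.get_roles(user)   # the extra GET step
    if role in current:
        return False                # already in desired state: report "ok"
    api.grant_role(user, role)      # the POST/PUT
    return True                     # report "changed"

class FakeApi:
    """In-memory stand-in for the real service, for demonstration."""
    def __init__(self):
        self.roles = {"alice": {"viewer"}}
    def get_roles(self, user):
        return self.roles.setdefault(user, set())
    def grant_role(self, user, role):
        self.roles[user].add(role)

api = FakeApi()
print(ensure_role(api, "alice", "admin"))  # True: role was granted
print(ensure_role(api, "alice", "admin"))  # False: already present, skipped
```

The second call returning False is exactly the property the question is after: the extra GET lets you report accurately whether anything changed, even though re-applying would be harmless.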

Triggering Kusto commands using 'ADX Command' activity in ADFv2 vs calling WebAPI on it

In ADFv2 (Azure Data Factory V2), if we need to trigger a command on an ADX (Azure Data Explorer) cluster, we have two choices:
Use the 'Azure Data Explorer Command' activity
Use the POST method provided in the 'WebActivity' activity
Having figured out that both methods work, I would say from a development/maintenance point of view the first method sounds more slick and systematic, especially because it is an out-of-the-box feature to support Kusto in ADFv2. Is there any scenario where the Web Activity method would be preferable or more performant? I am trying to figure out whether it's alright to simply use the ADX Command activity all the time to run any Kusto command from ADFv2 instead of ever using the Web activity.
It is indeed recommended to use the "Azure Data Explorer Command" activity:
That activity is more comfortable, as you don't have to construct the HTTP request yourself.
That activity takes care of a few things for you, such as:
In case you are running an async command, it will poll the Operations table until your async command is completed.
Logging.
Error handling.
In addition, you should take into consideration that the result format will be different between both cases, and that each activity has its own limits in terms of response size and timeout.
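To make concrete what "construct the HTTP request yourself" means in the WebActivity case, here is a hedged sketch of the request you would have to build by hand; the cluster, database, and command are placeholders, and the `/v1/rest/mgmt` endpoint with a `{"db", "csl"}` body follows the ADX management REST API:

```python
# Sketch of the raw management request the WebActivity route requires.
# The ADX Command activity builds this (plus AAD auth) for you.
import json

def adx_mgmt_request(cluster: str, database: str, command: str):
    url = f"https://{cluster}.kusto.windows.net/v1/rest/mgmt"
    body = {"db": database, "csl": command}
    headers = {
        "Content-Type": "application/json; charset=utf-8",
        # plus an "Authorization: Bearer <AAD token>" header, which the
        # ADX Command activity obtains automatically
    }
    return url, headers, json.dumps(body)

url, headers, payload = adx_mgmt_request("mycluster", "mydb", ".show tables")
```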

Getting "java.lang.IllegalStateException:Was expecting to find transaction set on current strand" while running nodes from terminal

There is a question on Stack Overflow, but in my case I run the nodes from the console: deployNodes, runnodes. So there is no StartedMockNode class to use the transaction{} function.
What's wrong with it and how can I fix it?
Here is the method throwing the exception
serviceHub.withEntityManager {
persist(callbackData)
}
Debugged this through with Hayk on Slack.
DB transactions are handled by Corda. These transactions are only created at two points: during node startup, so Corda services can invoke database queries and inserts, and inside of flows.
In this scenario, the database was being accessed from outside of node startup and not during a flow's invocation.
To circumvent this, a new flow needs to be created that handles the db operations that were causing the error. The db operation can still be kept inside of a Corda service, but it must be called from a flow.
This flow does not need a responder. It should be annotated with @StartableByService and should not need @InitiatingFlow (need to double check that one). The flow's call method simply invokes the db operation and returns the result back to the caller.
TLDR - all db operations must be called from inside a flow or during node startup.
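A minimal sketch of such a wrapper flow, in the same Kotlin style as the failing snippet above; the flow name, CallbackData type, and the service-side call are illustrative, not from the original code:

```kotlin
// Illustrative sketch: wrap the withEntityManager call in a flow so Corda
// sets up the DB transaction on the current strand.
@StartableByService
class PersistCallbackFlow(private val callbackData: CallbackData) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        serviceHub.withEntityManager {
            persist(callbackData)
        }
    }
}

// From the Corda service, instead of calling withEntityManager directly:
// appServiceHub.startFlow(PersistCallbackFlow(callbackData))
```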

Firestore trigger temporal information

Hi, so I understand Firestore write triggers run out of order with respect to time. Is it possible to get timestamp information on when a write occurred within the trigger function's execution context?
If you're using the Firebase CLI to deploy, every background function is delivered an EventContext object as its second parameter. You can use its timestamp property. Or, you can have the client write it into the document.
I assume something similar is available for the context object provided to code deployed by gcloud.
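For the gcloud case, a hedged sketch of reading the timestamp in a Python background function, where the runtime passes `(data, context)` and `context.timestamp` is an RFC 3339 string (the fake context in the usage note below is a stand-in for what the runtime delivers):

```python
# Sketch of a Firestore-triggered background function that recovers the
# write time from its context, assuming context.timestamp carries an
# RFC 3339 string with millisecond precision (real events may vary).
from datetime import datetime

def on_firestore_write(data, context):
    # e.g. context.timestamp == "2023-01-01T12:00:00.123Z"
    written_at = datetime.strptime(context.timestamp, "%Y-%m-%dT%H:%M:%S.%fZ")
    print(f"write occurred at {written_at.isoformat()}")
    return written_at
```

Because triggers can run out of order, comparing these timestamps (rather than wall-clock time inside the function) is what lets you decide which write actually happened first.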

Obtain ServiceDeploymentId in TrackingParticipant

In WF4, I've created a descendant of TrackingParticipant. In the Track method, record.InstanceId gives me the GUID of the workflow instance.
I'm using the SqlWorkflowInstanceStore for persistence. By default records are automatically deleted from the InstancesTable when the workflow completes. I want to keep it that way to keep the transaction database small.
This creates a problem for reporting, though. My TrackingParticipant will log the instance ID to a reporting table (along with other tracking information), but I'll want to join to the ServiceDeploymentsTable. If the workflow is complete, that GUID won't be in the InstancesTable, so I won't be able to look up the ServiceDeploymentId.
How can I obtain the ServiceDeploymentId in the TrackingParticipant? Alternately, how can I obtain it in the workflow to add it to a CustomTrackingRecord?
You can't get the ServiceDeploymentId in the TrackingParticipant. Basically the ServiceDeploymentId is an internal detail of the SqlWorkflowInstanceStore.
I would either set the SqlWorkflowInstanceStore to not delete the workflow instance upon completion, and do so myself at some later point in time after saving the ServiceDeploymentId with the InstanceId.
An alternative is to use auto cleanup with the SqlWorkflowInstanceStore and retrieve the ServiceDeploymentId when the first tracking record is generated. At that point the workflow is not complete, so the original instance record is still there.