How to fetch all the deployments completed after a given timestamp using the Azure DevOps REST API?

So there is the List Release Deployments API. It allows you to get all the deployments started after a given timestamp. However, how do I get those completed after it?
Rationale
Suppose I want to write a scheduled job that fetches information about completed deployments and pushes it to an Azure Application Insights bucket (for example, for DORA metrics). I do not see how this can be done easily without the ability to filter by completion date. A relatively hard way would be to fetch by the started date, note all the deployments that are inProgress or notDeployed, and record them in a dedicated database. Then, on the next polling cycle, fetch the new deployments plus all those still recorded in that database. This is much more complicated than it would be with the ability to filter by completion date.
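For concreteness, here is a minimal sketch of that hard way in Python, assuming PAT authentication. The organization, project, PAT, and the record_completed sink are hypothetical; minStartedTime and the deploymentStatus values come from the List Release Deployments contract.

```python
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "my-pat"  # hypothetical values
URL = f"https://vsrm.dev.azure.com/{ORG}/{PROJECT}/_apis/release/deployments"

pending = {}  # deployment id -> startedOn; stands in for the dedicated database

def record_completed(d):
    # Placeholder sink; the real job would push to Application Insights here.
    print(d["id"], d["deploymentStatus"], d.get("completedOn"))

def poll(min_started_time):
    resp = requests.get(
        URL,
        params={"minStartedTime": min_started_time, "api-version": "7.1"},
        auth=("", PAT),  # PAT as the basic-auth password
    )
    resp.raise_for_status()
    for d in resp.json()["value"]:
        if d["deploymentStatus"] in ("inProgress", "notDeployed"):
            pending[d["id"]] = d["startedOn"]  # still running: check again next cycle
        else:
            pending.pop(d["id"], None)
            record_completed(d)

def next_window_start(last_poll_time):
    # Deployments can't be fetched by id (see EDIT 1 below), so the next poll
    # has to widen its window back to the oldest still-pending start time.
    return min(pending.values(), default=last_poll_time)
```

In practice the response is also paginated, so a real job would additionally follow the continuationToken the API returns.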
Am I missing anything here? Maybe there is a simpler way (possibly using another API) that I just do not see?
EDIT 1
By the way, the hard way is even harder than I thought, since apparently there is no way to fetch release deployments by their Id.
EDIT 2
The plot thickens. If a stage has post-deployment approvals, the stage is reported as inProgress even though de facto it has already been deployed. So an API that just filters by completion date would omit such deployments; it would need an option to include them as well.
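To make the caveat concrete, a hedged predicate along these lines could treat such stages as done. The postDeployApprovals array is part of the documented Deployment payload, but verify the exact statuses against your own responses before relying on it.

```python
# Heuristic for the EDIT 2 caveat: an inProgress deployment whose only
# outstanding work is a pending post-deployment approval has de facto
# already been deployed. Field names follow the Deployment contract as I
# understand it; treat them as assumptions to verify.
def effectively_completed(d):
    status = d["deploymentStatus"]
    if status in ("succeeded", "partiallySucceeded", "failed"):
        return True
    if status == "inProgress":
        approvals = d.get("postDeployApprovals") or []
        # deployment work finished; only the approval gate is still open
        return any(a.get("status") == "pending" for a in approvals)
    return False
```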

Related

Access lastScheduledTime from cron workflow

I'm trying to implement automatic backfills in Argo workflows, and one of the last pieces of the puzzle I'm missing is how to access the lastScheduledTime field from my workflow template.
I see that it's part of the template, and I see it getting updated each time a workflow is scheduled, but I can't find a way to access it from my template to calculate how many executions I might have missed since the last time the scheduler was online.
Is this possible? And is this even the best way to implement this functionality in Argo?
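Not a definitive answer, but since lastScheduledTime lives in the CronWorkflow's status subresource, one workaround is to read it back through the Kubernetes API, either from a step in the workflow or from outside the cluster. A minimal sketch with the official Python client, with the namespace and resource name as placeholders:

```python
# Read lastScheduledTime from a CronWorkflow's status via the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

cw = api.get_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argo",             # hypothetical namespace
    plural="cronworkflows",
    name="my-backfill-cron",      # hypothetical CronWorkflow name
)
print(cw["status"]["lastScheduledTime"])  # e.g. "2021-06-01T00:00:00Z"
```

A backfill step could then compare that timestamp against the cron schedule to count the executions it missed.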

Azure DevOps API work with releases states

I want to get only the releases whose state is in progress, but when I send the request https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases?definitionId={definitionId}&api-version=5.1 to get all releases and see which field is responsible for the release state, I see that all releases have the status active.
I thought I should use https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases?definitionId={definitionId}&statusFilter=active&api-version=5.1, but that is the wrong way to achieve my goal.
What should I use? Thank you in advance!
That's because you're looking at the wrong API. You need to use the deployments API and specify deploymentStatus for querying the status of deployments.
The API you're using queries the overall status of a given release, not the individual stages. The JSON returned contains information about the stages, but you can't filter the stages based on their current status with that API.
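For example, a sketch of that deployments query in Python; the organization, project, definition id, and PAT are placeholders, and api-version 5.1 matches the question:

```python
import requests

url = "https://vsrm.dev.azure.com/my-org/my-project/_apis/release/deployments"  # hypothetical org/project
resp = requests.get(
    url,
    params={
        "definitionId": 42,                # hypothetical release definition
        "deploymentStatus": "inProgress",  # the filter the releases API lacks
        "api-version": "5.1",
    },
    auth=("", "my-pat"),  # PAT as the basic-auth password
)
resp.raise_for_status()
for d in resp.json()["value"]:
    print(d["id"], d["deploymentStatus"])
```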

How to (properly) Create Jobs On Demand

What I would like to do
I would like to create a Kubernetes workflow where users could POST jobs whenever they wanted, and they might do it at any time, not necessarily scheduling anything (CronJobs), or specifying parallelism or completion requirements, i.e., users could create Jobs on demand.
How I would do it right now
The way I'm thinking about accomplishing this is by simply applying the Jobs to the Kubernetes cluster (I also have to make sure each job doesn't have the same name as an existing one, because otherwise Kubernetes will treat it as a mistake and won't create another one). However, this feels improper because the Jobs will be kind of scattered across the cluster and I would lose control over them (though Kubernetes would supposedly manage them optimally).
Is there a better, more proper way?
I imagine a more proper way of configuring all this is to create some sort of Deployment and Service on top of the Jobs, but is that an existing feature in Kubernetes? Huge companies have probably had this problem in the past, so I wonder: what are the best practices for this Kubernetes Jobs-on-demand use case?
Not a full answer but you might be interested in this project: https://github.com/ivoscc/kubernetes-task-runner.
It provides an API to launch one-time tasks as Jobs on a Kubernetes cluster, handles input/output files via GCS and periodically cleans up finished Jobs.
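For reference, a minimal sketch of the on-demand pattern from the question, using the official Kubernetes Python client: generate a unique Job name to sidestep the duplicate-name problem, and let ttlSecondsAfterFinished do the cleanup that the project above performs periodically. The image, namespace, and command are placeholders.

```python
import uuid
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name=f"on-demand-{uuid.uuid4().hex[:8]}"),  # unique name per request
    spec=client.V1JobSpec(
        ttl_seconds_after_finished=3600,  # auto-delete the Job an hour after it finishes
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="task",
                    image="busybox",                                # placeholder image
                    command=["sh", "-c", "echo running user task"],
                )],
            ),
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
```

Wrapping this in a small HTTP service gives users the POST-to-create-a-Job interface described above, without Jobs being applied by hand.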

Jenkins Workflow API - stage status

What information are we able to query on a workflow job? Anything regarding a particular build's stage status (succeeded, failed, hasn't reached yet, aborted, etc.)? I see we can interact with the input step using this method, but where can we find what metadata, if anything, can be obtained about our builds?
The REST exported API for builds (…/job/…/…/api/json?tree=…) is not very extensive yet. You can get some information about nodes in the flow graph (steps, and some associated block nodes—the stuff you see in Workflow Steps). It is possible to extract some information about stages from that, albeit not easily. Much more is available from the Java API.
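As a hedged illustration of both routes, here is the exported API with a tree filter for the basic build fields, plus, if the Pipeline Stage View plugin is installed, its wfapi endpoint, which does report per-stage status. The URL, job name, and build number are placeholders, and authentication is omitted.

```python
import requests

base = "https://jenkins.example.com/job/my-pipeline/42"  # hypothetical job/build

# Core build fields via the exported REST API
build = requests.get(f"{base}/api/json", params={"tree": "result,building,duration"}).json()
print(build["result"], build["building"])

# Per-stage status via the Pipeline Stage View plugin's REST API
run = requests.get(f"{base}/wfapi/describe").json()
for stage in run.get("stages", []):
    print(stage["name"], stage["status"])  # e.g. SUCCESS, FAILED, IN_PROGRESS
```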

MS CRM recursive workflow and performance

I’m about to write a workflow in CRM that calls itself every day. This is a recursive workflow.
It will run on half a million entities each day and deactivate each record that has not been updated in the past 3 days.
I'm worried about performance. Has anyone else done this?
I haven't personally implemented anything like this, but that's 500,000 records that are floating around in the DB that the async service has to keep track of, which is going to tax your hardware. In addition, CRM keeps track of recursive workflow instances. I don't have the exact specs in front of me, but if a workflow calls itself a set number of times within a certain timeframe, CRM will kill the workflow.
Could you just write a console app that asks the CRM service for records that haven't been updated in three days, and then deactivates them? Run it as a scheduled task once a day, and then your CRM system doesn't have the burden of keeping track of all those running workflow instances.
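As a sketch of that scheduled task, here is the same idea against the modern Dataverse Web API; the entity, org URL, and token handling are illustrative, and an implementation contemporary with the question would have used the .NET SDK's QueryExpression instead.

```python
from datetime import datetime, timedelta, timezone
import requests

BASE = "https://my-org.crm.dynamics.com/api/data/v9.2"   # hypothetical org
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

cutoff = (datetime.now(timezone.utc) - timedelta(days=3)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Fetch only active records untouched for three days, so nothing comes back
# that we won't deactivate.
resp = requests.get(
    f"{BASE}/accounts",  # hypothetical entity set
    params={"$select": "accountid",
            "$filter": f"modifiedon lt {cutoff} and statecode eq 0"},
    headers=HEADERS,
)
resp.raise_for_status()

for rec in resp.json()["value"]:
    # Deactivate by patching statecode/statuscode (1/2 = Inactive for accounts)
    requests.patch(f"{BASE}/accounts({rec['accountid']})",
                   json={"statecode": 1, "statuscode": 2},
                   headers=HEADERS)
```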
EDIT: Ah, I see now you might have been thinking of one workflow that runs on all the records as opposed to workflows running on each record. benjynito's advice makes sense if you go this route, although I still think a scheduled task would be more appropriate than using workflow.
You'll want to make sure your workflow is running in non-peak hours. Assuming you have an on-premise installation you should be able to get away with that. If you're using a hosted instance, you might be worried about one organization running the workflow while another organization is using the system. Use the timeout and maybe a custom workflow activity, if necessary, to force the start time to a certain period.
I'm assuming you'll be as efficient as possible in figuring out which records to deactivate. (i.e. Query Expression would only bring back the records you'll be deactivating).
The built-in infinite loop-protection offered by CRM shouldn't kill your workflow instances. It stops after a call depth of 8, but it resets to 1 if no calls are made for an hour. So the fact that you're doing this once a day should make you OK on the recursive workflow front.