In Azure DevOps, I have a pipeline where I need the logs of a specific task. How do I find out which log ID I need to fetch them?
E.g. in the UI this is the endpoint: https://dev.azure.com/myorg/myspace/_build/results?buildId=1234&view=logs&j=899c4bff-9ac3-12de-4775-50e701812cb4&t=bc949ec8-c945-5220-1d40-d8ea7dab4bda
which contains the job and task IDs, but these are useless when querying logs.
Same example, the URL to the logs I need: https://dev.azure.com/myorg/cd642969-da00-4584-ab6a-4b6021c47eff/_apis/build/builds/1234/logs/24
The number of tasks depends on what parameters I set, so the number 24 changes. How do I calculate the log ID if I know the name / ID of the job and task?
Should I go through all the ~100 task logs and grep the first lines for a match on the task name? (troll)
How do I calculate the log id, if I know the name / id of the job and task?
To get the log ID from the task name, you could try the following REST API: Timeline - Get
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/timeline?api-version=6.0
You can search the timeline records by task name; each record's log property contains the target log ID.
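As a minimal sketch (assuming a personal access token with Build read scope, the org/project/build ID from the question, and a placeholder task name "MyTaskName"), the lookup could be done in PowerShell like this:

$pat = "<personal access token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
# Timeline - Get returns one record per stage/phase/job/task of the build
$timeline = Invoke-RestMethod -Uri "https://dev.azure.com/myorg/myspace/_apis/build/builds/1234/timeline?api-version=6.0" -Headers $headers
# Filter the records by type and name; the record's log object holds the log ID
$task = $timeline.records | Where-Object { $_.type -eq "Task" -and $_.name -eq "MyTaskName" }
$task.log.id   # use this number in .../_apis/build/builds/1234/logs/{logId}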
I want to check whether the database/server is online before I kick off a pipeline. If the database is down, I want to cancel the pipeline processing. I would also like to log the results in a table.
format (columns) : DBName Status Date
If the DB/server is down, then I want to send an email to the concerned team with a formatted table showing which DBs/servers are down.
Approach:
Run a query on each of the servers. If there is a result, format the output as shown above. I am using an ADF pipeline to achieve this. My issue is how to combine the various outputs from the different servers.
For example:
Server1:
DBName: A Status: ONLINE runDate:xx/xx/xxxx
Server2:
DBName: B Status: ONLINE runDate:xx/xx/xxxx
I would like to combine them as follows:
Server DBName Status runDate
1 A ONLINE xx/xx/xxxx
2 B ONLINE xx/xx/xxxx
I would use this to update the logging table, as well as in the email if I were to send one out.
Is this possible using the Pipeline activities or do I have to use mapping dataflows?
I did similar work a few weeks ago. We built an API that holds all the server-related settings or URL endpoints we need to ping.
You don't need to store the username/password of the SQL Server at all. When you ping the SQL Server, the connection will time out if it isn't online. If it is online, it will give you a password-related error. This way you can easily figure out whether it's up and running.
AFAIK, if you are using Azure DevOps you can use your service account to log into the SQL Server. If you have set up AD to log into DevOps, this can be done in the build script.
Either way you will be able to tell whether SQL Server is up and running or not.
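A minimal sketch of that probe (assuming Windows PowerShell, where System.Data.SqlClient is available; the server name and the output format are placeholders, and the wrong credentials are deliberate, since we only care how the server responds):

# Placeholder server; a timeout means unreachable, a login error proves the server answered
$connStr = "Server=tcp:myserver.database.windows.net,1433;User ID=probe;Password=wrong;Connection Timeout=5"
$conn = New-Object System.Data.SqlClient.SqlConnection $connStr
try {
    $conn.Open()
    $status = "ONLINE"
}
catch {
    # A login/password error still means the server is up; anything else is treated as down
    $status = if ($_.Exception.Message -match "Login failed") { "ONLINE" } else { "DOWN" }
}
finally {
    $conn.Close()
}
"DBName: A Status: $status runDate: $(Get-Date -Format 'MM/dd/yyyy')"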
You can have all the actions as tasks in a YAML pipeline.
You need something like the below:
steps:
- script: ./check-database-status.sh      # placeholder: check database status and set an output variable
  name: checkDb
- script: echo "$(checkDb.dbStatus)" >> results.txt   # add results to a file
- script: ./send-email.sh                 # placeholder: send e-mail
  condition: eq(variables['checkDb.dbStatus'], 'DOWN')   # run only when the condition is met
There are several built-in and marketplace tasks to achieve what you need; you need to find the right ones. You can control the flow by capturing results in output variables and using conditions on later tasks.
I have a pipeline with multiple stages that deploys groups of virtual machines.
And registers one of them to an Azure Pipelines environment.
Then I want to target that registered VM in a deployment job.
I have a problem targeting that resource by name, as the resource does not exist in the environment at queue time, so I cannot even disable the stage before running.
So my next option is targeting by tags.
But I saw no option in the registration script to define tags at registering time.
So my pipeline flow has a manual step between stages: go to the environment and tag the resource.
Then I can trigger the deployment stage of the pipeline and it continues fine.
So my question is:
Is there any way of disabling the resource evaluation at queue time, or any way to tag resources in the environment programmatically?
Thanks
But I saw no option in the registration script to define tags at registering time.
When running the registration script there is a step, Enter Environment Virtual Machine resource tags? (Y/N) (press enter for N); at this point you need to enter Y, and then in the next step, Enter Comma separated list of tags (e.g web, db), define the tags for the resource.
Update:
You can add --addvirtualmachineresourcetags --virtualmachineresourcetags "<tag>" to the registration script.
You can refer to this case for details.
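As a sketch, the tag switches can simply be appended to the registration command that the environment generates (the organization, project, environment name, PAT and tags below are placeholders):

.\config.cmd --environment --environmenturl "https://dev.azure.com/myorg/" `
    --agent $env:COMPUTERNAME --runasservice --work "_work" `
    --projectname "myspace" --environmentname "MyEnvironment" `
    --auth PAT --token $pat `
    --addvirtualmachineresourcetags --virtualmachineresourcetags "web,db"

This way the VM resource is tagged at registration time, so the manual tagging step between stages can be dropped.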
I want to create a task group where Azure Resource Manager Connection is filled with a parameter:
However, this is not possible to do in the portal, as validation forces you to fill it with a working value. So I tried to export the task group as JSON, modify it and import it again, but then I got this message when saving the release pipeline:
Is there a way to overcome this? I understand that this is a security check (which, by the way, doesn't work in YAML pipelines, because there you can use an Azure Resource Manager connection even if you are not allowed to). However, this way it limits the usage of the task group to a single connection.
EDIT:
Kevin, thank you for your answer. I tried it but it didn't work for me.
So I have the connection rg-the-code-manual:
I created a variable with it:
But when I tried to use it I have a validation error:
Based on my test, when I set the variable as the Azure Resource Manager Connection name, I could reproduce the same issue.
For example:
To solve this issue, you need to set the variable value in release pipeline.
Then you could save the release pipeline successfully.
On the other hand, you could also set the default value for the variable in Task Group.
In this case, the task group will use the default value in the release pipeline. The parameter will also exist in the task group task, and you can directly select the value in the drop-down list.
Note: you need to make sure that the Service connection name is valid.
I want to update a Pipeline with the Definitions - Update REST API call.
That works fine, but when I want to add a custom task (a self-made build pipeline task extension), I'm struggling to find the correct task reference ID:
Invoke-RestMethod : {"$id":"1","innerException":null,"message":"The pipeline is not valid. A task is missing. The pipeline references a task called '7f1fe94f-b811-4ba1-9d6a-b6c27de758d7'. This
usually indicates the task isn't installed, and you may be able to install it from the Marketplace: https://marketplace.visualstudio.com. (Task version 1.*, job 'Job_1', step ''.),Job Job_1: Step
has an invalid task definition reference. A valid task definition reference must specify either an ID or a name and a version specification with a major version
specified.","typeName":"Microsoft.TeamFoundation.DistributedTask.Pipelines.PipelineValidationException,
Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"PipelineValidationException","errorCode":0,"eventId":3000}
I checked the registrationId of my custom task with the Installed Extensions - List REST API call, but it is not the correct one (7f1fe94f-b811-4ba1-9d6a-b6c27de758d7).
I also added the custom task manually to a pipeline and read out the correct task reference ID with the Definitions - Get REST API call. I could find the ID in:
$pipeline.process.phases.steps.task.id -> 2c7efb3e-3267-4ac6-addc-86e88a6dab34
But how can I read out this ID without adding the custom task manually?
This ID is obviously dynamic and changes every time the custom task gets installed, so there must be a way to get this reference.
The task ID does not change every time the custom task gets installed; it is defined in the task.json of the task:
{
    "id": "2f159376-f4dk-4311-a49c-392f9d534113",
    "name": "TaskName",
    "friendlyName": "Task Name",
    ...
}
Another option is to use this API:
https://dev.azure.com/{organization}/_apis/distributedtask/tasks
You will get a long list of all the tasks; search for your task and you will see its ID.
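For example, a rough PowerShell sketch that filters the list down to the ID you need (the PAT and the task name "MyCustomTask" are placeholders):

$pat = "<personal access token>"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
# Returns every task definition known to the organization, including tasks installed by extensions
$tasks = Invoke-RestMethod -Uri "https://dev.azure.com/myorg/_apis/distributedtask/tasks" -Headers $headers
$tasks.value | Where-Object { $_.name -eq "MyCustomTask" } | Select-Object id, name, version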
I am using https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin 1.93.1 in my Jenkins installation.
I need to fetch the EnvInject properties configuration for every job I have, but I can't seem to find a way to do this through the job REST API.
The way I figured to do this is by fetching the last build of every job and then hitting its injectedEnvVars/api/.
This strategy is not optimal because I have to do a request for every job, and that is taking too long (4000+ jobs).
Am I missing something? Is there a way to fetch the envInject properties together with the job information?
If you think sending 4000+ HTTP requests is not efficient, you can iterate over the JENKINS_HOME folder on the Jenkins master.
The following layout shows where injectedEnvVars.txt, which stores the EnvInject values, lives for each build:
.jenkins is the JENKINS_HOME folder
fetch-envinject-value is a Jenkins job
builds/1 is the first build of that job
builds/1/injectedEnvVars.txt contains all environment variables for that job build.
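A rough sketch of that iteration in PowerShell, run on the Jenkins master (the JENKINS_HOME path is a placeholder, and only the latest numbered build of each job is inspected):

$jenkinsHome = "$HOME/.jenkins"   # adjust to your JENKINS_HOME
Get-ChildItem "$jenkinsHome/jobs" -Directory | ForEach-Object {
    # Pick the highest-numbered build directory of this job (skip symlinks like lastSuccessfulBuild)
    $latest = Get-ChildItem "$($_.FullName)/builds" -Directory -ErrorAction SilentlyContinue |
        Where-Object { $_.Name -match '^\d+$' } |
        Sort-Object { [int]$_.Name } -Descending |
        Select-Object -First 1
    if ($latest) {
        $file = Join-Path $latest.FullName "injectedEnvVars.txt"
        if (Test-Path $file) {
            # The file holds the injected environment variables as KEY=VALUE lines
            [pscustomobject]@{ Job = $_.Name; EnvVarsFile = $file }
        }
    }
}

This reads straight from disk, so it avoids the 4000+ HTTP round trips at the cost of requiring filesystem access to the master.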