Azure Devops / Kudusync deploys wrong package to Azure Function - azure-devops

We have two Azure Functions, let's call them A and B, that we deploy using Azure DevOps. Looking at the deployment logs, first the right package is deployed, followed by a deploy in which the contents of Function App A are deployed to Function App B. How this happens is a mystery.
The strange thing is that the logs and what is actually inside Kudu don't match up. For example, the latest deploy logs say this:
[
{
"log_time": "2021-09-17T10:57:17.3180234Z",
"id": "",
"message": "Command: \"D:\\home\\site\\deployments\\tools\\deploy.cmd\"",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:18.5999063Z",
"id": "",
"message": "Handling Basic Web Site deployment.",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:24.8723981Z",
"id": "",
"message": "Creating app_offline.htm",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:24.9036384Z",
"id": "",
"message": "KuduSync.NET from: 'D:\\local\\Temp\\zipdeploy\\extracted' to: 'D:\\home\\site\\wwwroot'",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:25.0941627Z",
"id": "",
"message": "Deleting file: 'OurCompany.Api.B.dll'",
"type": 0,
"details_url": null
},
"log_time": "2021-09-17T10:57:25.1557977Z",
"id": "",
"message": "Copying file: 'extensions.json'",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:25.1718472Z",
"id": "",
"message": "Copying file: 'OurCompany.Api.A.dll'",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:27.4441118Z",
"id": "",
"message": "Deleting app_offline.htm",
"type": 0,
"details_url": null
},
{
"log_time": "2021-09-17T10:57:27.5223301Z",
"id": "",
"message": "Finished successfully.",
"type": 0,
"details_url": null
}
]
But when I go to Kudu on the server of the OurCompany.Api.B function app and look at the actual contents of D:\local\Temp\zipdeploy\extracted, I see no OurCompany.Api.A.dll - only OurCompany.Api.B.dll. Note that the timestamps of the files in the extracted folder predate the second deployment, so they are from the first and CORRECT deploy.
If I look at the contents of D:\home\site\wwwroot I DO see the incorrect OurCompany.Api.A.dll, but where it is coming from I have no idea.
So... how can the deploy logs mention copying incorrect dll files while those files are not in the extracted folder on the server, and never were there, given that the timestamps don't add up? We've been working on this issue for days now without success.

Apparently this is happening because both Function Apps are hosted on the same App Service plan and thus use the same Azure file share. See this thread: https://github.com/projectkudu/kudu/issues/3333.
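A commonly suggested mitigation for this situation (an assumption on my part, not taken from the thread above) is to give each Function App its own content share so the two apps no longer write to the same files. A minimal sketch using the Azure CLI from PowerShell, with hypothetical resource group, app and share names:

# Hypothetical names - replace with your own resource group, apps and share names.
$resourceGroup = "rg-functions"

# Point each Function App at a distinct content share so their deployments
# no longer overlap on the App Service plan's shared Azure file share.
az functionapp config appsettings set `
    --name "func-api-a" `
    --resource-group $resourceGroup `
    --settings "WEBSITE_CONTENTSHARE=func-api-a-content"

az functionapp config appsettings set `
    --name "func-api-b" `
    --resource-group $resourceGroup `
    --settings "WEBSITE_CONTENTSHARE=func-api-b-content"

Note that changing WEBSITE_CONTENTSHARE points an app at a new, empty share, so both apps need to be redeployed afterwards.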

Related

How to set/update test run owner in Azure Devops Rest API?

I am creating a test run in Azure DevOps with the REST API. A sample POST request body I send looks like this:
{
"name": "Test Run Name",
"automated": true,
"plan": {
"id": 11111111,
"name": null,
"url": null,
"state": null,
"iteration": null
},
"pointIds": [
222222222222
],
"build": {
"id": "2222233455",
"buildNumber": "buildNumberjlkajdlajsldj",
"uri": "vstfs:///Build/Build/2222222222",
"sourceBranch": "refs/pull/22222/merge",
"definition": {
"id": "2222"
}
},
"buildConfiguration": {
"id": 3333333,
"number": "buildNumberjlkajdlajsldj",
"uri": "vstfs:///Build/Build/222222222222"
},
"owner": {
"id": "44444444-2222-bbbb-aaaa-1111111111",
"descriptor": "aad.AAAAAAAAAAAAAAAAAAAAAAAAAA"
}
}
But here the owner information is ignored and the owner is set to the authenticated user making the request. The requesting account has the necessary permissions.
Am I doing something wrong, or can the owner not be assigned to another user?
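For context, a request like this can be sent as follows; a minimal sketch assuming a PAT-based call to the Test Runs - Create endpoint, with hypothetical organization and project names and the body above saved as run-body.json:

# Hypothetical organization, project and PAT - replace with your own values.
$organization = "myorg"
$project = "MyProject"
$pat = $env:AZURE_DEVOPS_PAT

# Build a Basic auth header from the PAT.
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $token" }

# POST the test run body shown above.
$url = "https://dev.azure.com/$organization/$project/_apis/test/runs?api-version=6.0"
$body = Get-Content -Raw -Path "run-body.json"
Invoke-RestMethod -Method Post -Uri $url -Headers $headers -Body $body -ContentType "application/json"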

Arm template deployment fail with 409 error for one specific storage account

I use an ARM template to deploy a storage account. However, I get an error saying: StorageAccountAlreadyExists: The storage account named xxx already exists.
My release pipeline is set to incremental, so it shouldn't really throw this error.
I changed the storage account name to a new one; not only did it work the first time, but I can keep deploying the same pipeline and no error is ever thrown.
It looks like something specific to this account, but I can't see anything special about it. The ARM template we use is also quite normal (something we got from official examples before).
{
"$schema": "http://schema.management.azure.com/schemas/2019-06-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"StorageDescriptor": {
"type": "string",
"defaultValue": "StorageAccount",
"metadata": {}
},
"StorageAccountName": {
"type": "string",
"defaultValue": "[toLower(concat(parameters('StorageDescriptor'), resourceGroup().name))]",
"metadata": { "Description": "Override name for the storage account" }
},
"StorageType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_ZRS",
"Standard_GRS",
"Standard_RAGRS",
"Premium_LRS"
]
},
"Environment": {
"type": "string",
"defaultValue": "PreProd",
"metadata": { "description": "PreProd or Prod" }
}
},
"variables": {
},
"resources": [
{
"name": "[parameters('StorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"location": "[resourceGroup().location]",
"apiVersion": "2019-06-01",
"dependsOn": [],
"tags": {
"displayName": "Web Job Storage Account"
},
"properties": {
"accountType": "[parameters('StorageType')]"
}
}
],
"outputs": {
}
}
Even though your release pipeline is set to incremental, the storage account name must be unique every time you deploy.
Arm template deployment fail with 409 error for one specific storage account
You need to check whether the storage account attributes have been changed by somebody else through the Azure portal or PowerShell, so that they differ from the ones specified in the ARM template.
To resolve this issue, try exporting the template and updating it in the Azure DevOps repo:
Then update this newly exported template file as needed and deploy with it.
As a test, I could keep deploying the same pipeline with no error ever thrown.
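A quick way to do that comparison is to read the deployed account back and compare it to the template; a minimal sketch with the Az PowerShell module, using hypothetical resource group and account names:

# Hypothetical names - replace with the resource group and account from your pipeline.
$resourceGroup = "my-rg"
$accountName = "storageaccountmyrg"

# Read the live storage account and show the properties that should match the
# ARM template; a mismatch with the existing account (for example location)
# can cause a conflict on redeploy.
$account = Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $accountName
$account | Select-Object StorageAccountName, Location, Kind, @{ Name = "Sku"; Expression = { $_.Sku.Name } }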

Azure Blob Storage Deployment: Stored Access Policy gets deleted

Context:
I deploy a storage account as well as one or more containers with the following ARM template via Azure DevOps, specifically with a resource deployment task:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"storageAccountName": {
"type": "string",
"metadata": {
"description": "The name of the Azure Storage account."
}
},
"containerNames": {
"type": "array",
"metadata": {
"description": "The names of the blob containers."
}
},
"location": {
"type": "string",
"metadata": {
"description": "The location in which the Azure Storage resources should be deployed."
}
}
},
"resources": [
{
"name": "[parameters('storageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-07-01",
"location": "[parameters('location')]",
"kind": "StorageV2",
"sku": {
"name": "Standard_LRS",
"tier": "Standard"
},
"properties": {
"accessTier": "Hot"
}
},
{
"name": "[concat(parameters('storageAccountName'), '/default/', parameters('containerNames')[copyIndex()])]",
"type": "Microsoft.Storage/storageAccounts/blobServices/containers",
"apiVersion": "2018-03-01-preview",
"dependsOn": [
"[parameters('storageAccountName')]"
],
"copy": {
"name": "containercopy",
"count": "[length(parameters('containerNames'))]"
}
}
],
"outputs": {
"storageAccountName": {
"type": "string",
"value": "[parameters('storageAccountName')]"
},
"storageAccountKey": {
"type": "string",
"value": "[listKeys(parameters('storageAccountName'), '2018-02-01').keys[0].value]"
},
"storageContainerNames": {
"type": "array",
"value": "[parameters('containerNames')]"
}
}
}
Input can be e.g.
-storageAccountName 'stor1' -containerNames [ 'con1', 'con2' ] -location 'westeurope'
In a next step I create Stored Access Policies for the deployed containers.
Problem:
The first time I do that, everything works fine. But if I execute the pipeline a second time, the Stored Access Policies get deleted by the deployment of the template. The storage account itself, with its containers and blobs, is not deleted (as it should be). This is unfortunate because I want to keep the Stored Access Policies with their start time and expiry time as deployed the first time; furthermore, I expect the associated SAS to become invalid as well (not tested so far).
Questions:
Why is this happening?
How can I avoid this problem, i.e. keep the Stored Access Policies?
Thanks
After doing some investigation, this seems to be by design. When deploying ARM templates for storage accounts, the PUT operation is used, i.e. elements that are not specified within the template are removed. As it is not possible to specify Stored Access Policies for containers within an ARM template for storage accounts, existing ones get deleted when the template is redeployed...
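One possible workaround (my own assumption, not part of the answer above) is to re-create the policies in a pipeline step right after the template deployment; a minimal sketch with the Az PowerShell module, where the policy name, permissions and expiry are hypothetical and the account values come from the template outputs. Note that this resets the start and expiry times, so it only helps if re-issuing them is acceptable:

# Account name/key come from the ARM template outputs; the policy values are hypothetical.
$storageAccountName = "stor1"
$storageAccountKey = "<storageAccountKey output>"
$containerNames = @("con1", "con2")

$context = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey

foreach ($container in $containerNames) {
    # Re-create the stored access policy that the template redeployment removed.
    New-AzStorageContainerStoredAccessPolicy -Container $container `
        -Policy "read-policy" `
        -Permission rl `
        -StartTime (Get-Date) `
        -ExpiryTime (Get-Date).AddYears(1) `
        -Context $context
}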

Require build to Contain Unit tests VSTS

I have one question.
We use VSTS in our company. Now I want to require my developers to write unit tests for each project.
My question is: can I configure the build in the Team Services web portal to require unit tests, and fail the build if a project doesn't contain any?
Thank you.
You can add a PowerShell task to check whether the build contains unit tests. Detailed steps are as below:
Add a PowerShell task after the VS Test task, and get the VS Test build log via the timeline REST API. For TFS, the REST API format is:
GET http://{tsserver}:8080/tfs/{collection}/{project}/_apis/build/builds/{buildId}/timeline?api-version=3.0
Then search the timeline for the VS Test task; its log URL gives you the detailed VS Test build log. In the example below, the VS Test task build log can be found at http://wxv-xindo-12r2:8080/tfs/DefaultCollection/5dfb8b33-9949-4187-9d09-474c8fe87238/_apis/build/builds/1477/logs/5.
{
"id": "c43e4f8f-3db9-4fdc-90c0-1153f165b9a9",
"parentId": "39d66172-979a-46e8-a04d-fe8e4e77da43",
"type": "Task",
"name": "Test Assemblies",
"startTime": "2018-05-25T02:31:11.177Z",
"finishTime": "2018-05-25T02:31:17.133Z",
"currentOperation": null,
"percentComplete": null,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 17,
"lastModified": "0001-01-01T00:00:00",
"workerName": "mypc",
"order": 5,
"details": null,
"errorCount": 0,
"warningCount": 2,
"url": null,
"log": {
"id": 5,
"type": "Container",
"url": "http://tfsserver:8080/tfs/DefaultCollection/5dfb8b33-9949-4187-9d09-474c8fe87238/_apis/build/builds/1477/logs/5"
},
"task": {
"id": "ef087383-ee5e-42c7-9a53-ab56c98420f9",
"name": "VSTest",
"version": "2.0.55"
},
"issues": [
{
"type": "warning",
"category": "General",
"message": "",
"data": {
"type": "warning",
"code": "002004"
}
},
{
"type": "warning",
"category": "General",
"message": "No test assemblies found matching the pattern: **\\release\\*test*.dll,!**\\obj\\**.",
"data": {
"type": "warning"
}
}
]
}
Then check if the VS test task has the warning:
[warning]No test assemblies found matching the pattern: '**\*test*.dll;-:**\obj\**'
If the VS Test build log contains the above warning, fail the PowerShell task with exit 1.
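A minimal sketch of such a PowerShell task (my own sketch, assuming the job has "Allow scripts to access the OAuth token" enabled; the VSTest task name and warning text are taken from the answer above):

# Predefined build variables provide the collection URL, project and build id.
$collectionUri = $env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI
$project = $env:SYSTEM_TEAMPROJECT
$buildId = $env:BUILD_BUILDID
$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }

# Get the build timeline and find the VS Test task record.
$timeline = Invoke-RestMethod -Uri "$collectionUri$project/_apis/build/builds/$buildId/timeline?api-version=3.0" -Headers $headers
$vsTest = $timeline.records | Where-Object { $_.task -and $_.task.name -eq "VSTest" } | Select-Object -First 1

if ($vsTest -and $vsTest.log) {
    # Download the task log and look for the "no test assemblies" warning.
    $log = Invoke-RestMethod -Uri $vsTest.log.url -Headers $headers
    if ($log -match "No test assemblies found matching the pattern") {
        Write-Error "No unit tests were found in this build."
        exit 1
    }
}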

TFS/VSTS vNext Build and Release Logs Location

Are the build and release logs saved locally on the Build Agent machine? I know that I can manually click on the "Download all logs as zip" link for each build and release, but if I want to automatically send these logs to someone else (or bulk send them), is there another way to find them (either on the Build Agent machine or in the database somewhere)?
Thanks!
For both TFS and VSTS, the build/release logs are stored on the TFS/VSTS server (regardless of which agent is used).
In the build/release retention settings you can set policies for how long builds/releases are kept; by default the latest 30 days' builds and releases are retained.
Besides clicking the "Download all logs as zip" button to get a build/release's logs, you can also get them through the REST API. For example:
GET https://account.visualstudio.com/DefaultCollection/Git2/_apis/build/builds/2373/timeline?api-version=2.0
And in the response you can get the log URL of each build step, such as:
{
"records": [
{
"id": "d2c6b274-40fe-4727-85b6-eb92fb4f6009",
"parentId": "ff7265dc-abe3-5e6a-6194-76bb88f00044",
"type": "Task",
"name": "Initialize Job",
"startTime": "2018-01-15T05:30:25.6Z",
"finishTime": "2018-01-15T05:30:26.0233333Z",
"currentOperation": null,
"percentComplete": null,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 8,
"lastModified": "0001-01-01T00:00:00",
"workerName": "V-myPC",
"order": 2,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": {
"id": 2,
"type": "Container",
"url": "https://account.visualstudio.com/DefaultCollection/f7855e29-6f8d-429d-8c9b-41fd4d7e70a4/_apis/build/builds/2373/logs/2"
},
"task": null
},
...
],
"lastChangedBy": "00000002-0000-8888-8000-000000000000",
"lastChangedOn": "2018-01-15T05:30:40.947Z",
"id": "4b4280d4-5358-4238-ab95-d44475c92bc9",
"changeId": 19,
"url": "https://account.visualstudio.com/DefaultCollection/f7855e29-6f8d-429d-8c9b-41fd4d7e70a4/_apis/build/builds/2373/Timeline/4b4280d4-5358-4238-ab95-d44475c92bc9"
}
As shown in the above response, you can find the Initialize Job step's log at the URL https://account.visualstudio.com/DefaultCollection/f7855e29-6f8d-429d-8c9b-41fd4d7e70a4/_apis/build/builds/2373/logs/2.
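If you want to collect the logs automatically (for example to bulk-send them to someone), here is a minimal sketch that walks the timeline and saves every step log to a local folder; the account, project, build id and PAT variable are hypothetical placeholders:

# Hypothetical account, project and build id - replace with your own values.
$baseUrl = "https://account.visualstudio.com/DefaultCollection/Git2/_apis/build/builds/2373"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$env:AZURE_DEVOPS_PAT"))
$headers = @{ Authorization = "Basic $token" }

# Read the timeline and download the log of every record that has one.
$timeline = Invoke-RestMethod -Uri "$baseUrl/timeline?api-version=2.0" -Headers $headers
New-Item -ItemType Directory -Path "build-logs" -Force | Out-Null

foreach ($record in $timeline.records) {
    if ($record.log) {
        # Build a safe file name from the step order and name.
        $fileName = "build-logs/{0:D3}-{1}.log" -f $record.order, ($record.name -replace '[\\/:*?"<>|]', '_')
        Invoke-RestMethod -Uri $record.log.url -Headers $headers | Set-Content -Path $fileName
    }
}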