Docker Compose task on Azure DevOps cannot start daemon - docker-compose

I'm unable to run the Docker Compose task on Azure DevOps, and every solution I've found online either makes no sense or does not work for my scenario.
The job output for the failure is:
This is a very simple process: artifacts are copied to a folder during build, and the docker-compose.yml and .dockerfile are added to this directory, which then needs to be run.
One article explained that putting your docker-compose.yml in the same folder as the .dockerfile and the files the image will be hosting might cause the daemon to fall over and produce this generic error, so I added a .dockerignore file, but the issue persists.
I'm using a Hosted Agent - Ubuntu-18.04.
My task looks like this:
steps:
- task: DockerCompose@0
  displayName: 'Run a Docker Compose command'
  inputs:
    azureSubscription: 'Test Dev Ops'
    azureContainerRegistry: '{"loginServer":"testdevops.azurecr.io", "id" : "/subscriptions/{subscription_key}/resourceGroups/Test.Devops/providers/Microsoft.ContainerRegistry/registries/testdevops"}'
    dockerComposeFile: '$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)/test.ng.$(Build.BuildNumber)/dist/testweb/docker-compose-build.yml'
    dockerComposeCommand: build
    arguments: '--build-arg azure_pat=$(System.AccessToken) --build-arg azure_username=Azure'
The idea here is that this container is composed and delivered straight to Azure Container Registry.
I have ensured that the user running this process has been granted permissions in that ACR, and I have also added the user to the Administrative group in Azure DevOps.
A lot of responses talk about adding the user to the Docker group, but this is a hosted agent, not a private agent, so there is no such option.
I have even tried installing the Docker CLI before this task, but nothing's working.
Am I being daft to think that I can compose in Azure DevOps?
Edit
The contents of my artifacts folder look something like this:

This error message is extremely misleading. If anyone from Microsoft is looking at this question, please consider making the error more specific, if possible.
It turned out I had missed a semicolon in a build task that replaced tokens before the build artifacts were pushed from the build output; because of that, the YAML file still contained a #{..} token, which caused docker-compose to fail.
It had nothing to do with permissions, nor with a .dockerignore file. Very misleading.
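If it helps anyone, one way to catch this earlier is to fail the release before the compose step if any unreplaced #{..} tokens are still present. This is just a sketch of the idea; the script step and the search path are assumptions, not part of my actual pipeline:

- script: |
    # Fail fast if the token-replacement task left any #{...} placeholders behind
    if grep -R '#{' "$(System.DefaultWorkingDirectory)/$(Release.PrimaryArtifactSourceAlias)" --include='*.yml'; then
      echo 'Unreplaced tokens found in YAML files'
      exit 1
    fi
  displayName: 'Check for unreplaced tokens (hypothetical guard step)'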

Related

Azure Function on Linux Breaks Node function requiring Node v12 when Deployed from Azure DevOps

I have a Node.js Azure Function using a timer trigger. It uses some modern JavaScript syntax (await, flatMap, etc.) that is supported in Node v12.
I've deployed my infrastructure with Terraform and specified the linuxFxVersion as "node|12". So far so good. But when I deploy my code from Azure DevOps using the built-in AzureFunctionApp@1 task, the deployment switches the function to an image running Node v8, which breaks my function.
Here is the release definition:
steps:
- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
    configurationStrings: '-linuxFxVersion: node|12'
You can see I explicitly try to force the linuxFxVersion to remain 'node|12' in the release.
In the release logs, you can watch the release try to set the linuxFxVersion configuration twice: once to the wrong image, and a second time to "node|12".
After I release the code, the function will still run, but when I print the node version it shows version 8 and fails at runtime when it hits the unsupported syntax.
If I re-run my terraform script, it will show me that the linuxFxVersion for my function app is now set to 'DOCKER|microsoft/azure-functions-node8:2.0' and it sets it back to "node|12". After that runs, my function now works. If I update my code and deploy again, it breaks again in the same way.
What is even more baffling to me is that this is a v3 function app, which in theory does not support Node v8 at all.
Am I missing something obvious here or is the Function App release task just broken for Linux Functions?
After writing up this whole big question and proofreading it... I noticed this little snippet in the release task YAML (which I hadn't seen before today, as it's a release and uses the AzDO GUI for editing):
runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
It turns out that if you specify the stack as 'JavaScript' (the options are .NET and JavaScript), the task sets "runtimeStack" to that string, and that is what gets written to the linuxFxVersion setting on the Function App, even if you override that setting in the configuration strings.
The fix is to leave the Runtime field blank and then it will respect your settings. Awesome.
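For reference, the task ends up looking roughly like my original definition with runtimeStack simply removed (a sketch only; the subscription, app name and package path are still the placeholders from above):

- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    # runtimeStack deliberately omitted so the task no longer overwrites linuxFxVersion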

AzureFileCopy with Azure DevOps pipeline fails - 'AzCopy.exe exited with non-zero exit code'

I'm trying to copy ARM templates to storage, but it's failing.
What could be wrong with the YAML?
ERROR:
& "AzCopy\AzCopy.exe" logout
INFO: Logout succeeded.
INFO: AzCopy.exe: A newer version 10.4.3 is available to download
Disconnect-AzAccount -Scope Process -ErrorAction Stop
Clear-AzContext -Scope Process -ErrorAction Stop
##[error]Upload to container: 'arm' in storage account: 'devopsstorageken' with blob prefix: 'test'
failed with error: 'AzCopy.exe exited with non-zero exit code while uploading files to blob storage.'
For more info please refer to https://aka.ms/azurefilecopyreadme
Finishing: AzureFileCopy
YML:
- task: AzureFileCopy@4
  inputs:
    SourcePath: '$(Build.Repository.LocalPath)/ARMTemplates/CreateSQLServerARM'
    azureSubscription: 'TestRG-Conn'
    Destination: 'AzureBlob'
    storage: 'devopsstorageken'
    blobPrefix: 'test'
    ContainerName: 'arm'
Your YAML looks right. I guess there might be something wrong with the task itself.
As a workaround, we can use AzureFileCopy@3; with that version we don't need to do any extra work in the Azure portal.
The preview AzureFileCopy@4 behaves a little differently: we need to make sure the Service Principal used by this task has access to the Storage Account. For me, I had to navigate to the Access control page and add a role assignment (Storage Blob Data Contributor/Owner role) for my Service Principal/Managed Identity:
With that in place, AzureFileCopy version 4 also works on my side.
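If you would rather script that role assignment than click through the portal, something along these lines should work (a sketch only; the service connection name, principal ID and scope are placeholders, not values verified against the question):

- task: AzureCLI@2
  inputs:
    azureSubscription: 'TestRG-Conn'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Grant the pipeline's service principal data-plane access to the storage account
      az role assignment create \
        --assignee '<service-principal-object-id>' \
        --role 'Storage Blob Data Contributor' \
        --scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/devopsstorageken'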
I also had to go back from AzureFileCopy@4 to AzureFileCopy@3. Since I am using Azure DevOps Pipelines, I already have the Contributor role on my Storage Account via my Service Connection.
However, I still had issues as soon as I configured TLS 1.2 as a requirement for my Storage Account. Currently, I can only work around the problem by also allowing TLS 1.0. Adjusting the TLS option is the only way I can get the task running.
Artifact path: please check the Source path carefully. Just to troubleshoot, provide the absolute path of the artifact and then try to deploy; you should be able to do it.
Once you succeed, reverse-engineer things from the absolute path of the Source.
It's working fine on version 2 for me.
Note: if you are using the Extract Files task, try replacing it with the Unzip task.
In addition to @lolance's answer, watch out for your source path parameter. Do not use * after the trailing slash, i.e. folder/build/*.
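In other words, something like this (a sketch reusing the path from the question):

SourcePath: '$(Build.Repository.LocalPath)/ARMTemplates/CreateSQLServerARM'   # point at the folder itself
# not: '$(Build.Repository.LocalPath)/ARMTemplates/CreateSQLServerARM/*'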

Forcing data seeding during release stage of the pipeline

I am having trouble seeding data into the database using DevOps. I have a YAML file with the following build step (I've stripped out the irrelevant steps):
- task: CmdLine@2
  inputs:
    script: |
      dotnet tool install --global dotnet-ef --version 3.0
      dotnet tool restore
      dotnet ef migrations script -p $(Build.SourcesDirectory)/$(My.SQLProject)/$(My.SQLProject).csproj -o $(Build.ArtifactStagingDirectory)/migrations/script.sql -i
- task: PublishBuildArtifacts@1
This creates a migration SQL script just fine and pops it into drop.
During release, I create the database using an ARM deployment task and then run the SQL script:
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-sub'
    ServerName: 'my-server.database.windows.net'
    DatabaseName: 'my-db'
    SqlUsername: 'my-sqluser'
    SqlPassword: 'my-password'
    deployType: SqlTask
    SqlFile: '$(Pipeline.Workspace)/drop/migrations/script.sql'
This works fine - the schema in the DB is created.
I then create the App Service with connection string and the App Service connects to the DB just fine.
The bit I can't seem to get to work is the data seeding. I've googled lots and there are plenty of articles that talk about creating migrations and creating SQL scripts and then running the script in Devops. And there are plenty that talk about seeding data outside of Devops, but the bit I'm struggling with is how to get it to seed the data in Devops. One odd thing I have noticed is that if I re-run the build / deploy YAML, it then seeds the data without me having to tell it. So, I guess there are two questions:
Is data seeding something that MUST (or SHOULD) be done in the App Service code during App Service startup, or is it something that should be instigated during the release pipeline in DevOps? (I'm not the App Service developer. The dev says he thinks it should happen at app startup. It doesn't, so my thinking is that if he's missing something in his code, perhaps I can say "don't worry, I can kick off the data seeding myself in DevOps".)
If it should be done in DevOps, how should it be done? I would have thought that "dotnet ef database update -p " ought to do it, but that doesn't seem to work in the release pipeline.
Many thanks
After some experimentation I have an answer. It's not pretty, but it works. I don't know why the dev's seeding isn't working, but I can force it in DevOps like this:
Deploy the App Service (AzureWebApp@1)
Add the connection strings (AzureAppServiceSettings@1)
Re-deploy the App Service (AzureWebApp@1)
That seems to "force" the seeding. Job done.
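In YAML form, the release stage ends up roughly like this (a sketch only; the subscription, app name, package path and connection string are placeholders, not the real values):

- task: AzureWebApp@1
  displayName: 'Deploy the App Service'
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    package: '$(Pipeline.Workspace)/drop/app.zip'
- task: AzureAppServiceSettings@1
  displayName: 'Add the connection strings'
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    connectionStrings: |
      [
        {
          "name": "DefaultConnection",
          "value": "<connection-string>",
          "type": "SQLAzure",
          "slotSetting": false
        }
      ]
- task: AzureWebApp@1
  displayName: 'Re-deploy the App Service to force the seeding'
  inputs:
    azureSubscription: 'my-sub'
    appName: 'my-app'
    package: '$(Pipeline.Workspace)/drop/app.zip'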

Azure Pipeline with task Sonarqube https

I added a SonarQube task to my Azure build pipeline; in order to log in to my SonarQube server I need to run a command which uses an SSL trust store.
My pipeline looks like this:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com
      sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit
I am not sure whether this property, "sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit", is correct.
I got the error "API GET '/api/server/version' failed, error was: {"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE"}".
PS: my project is an Angular project.
Any solutions?
This issue should be related to how the configure task works. Even if we add the certificate to the Java truststore, the task that sets the configuration uses a different runtime (not Java, at least) to communicate with the server; that's why you still get the certificate error.
To resolve this issue, you could try to set a global variable, NODE_EXTRA_CA_CERTS, and point it to a copy of the root cert stored locally in a directory. See this article.
Check the related ticket for some more details.
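For example, a minimal sketch of that option in the pipeline YAML (the certificate path is an assumption; point it at wherever your root CA .pem actually lives):

variables:
  NODE_EXTRA_CA_CERTS: '$(Build.SourcesDirectory)/certs/root-ca.pem'

Pipeline variables are exposed to tasks as environment variables, so the Node-based SonarQubePrepare step should pick this up.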
Hope this helps.

Running Powershell scripts on Web App machine

I have an Azure web app. This web app has a QA deployment slot for pre-production testing. When I check in my code from VS, I have it set up to build and deploy to the QA deployment slot. This works great. However, a few configurations need to be updated in the QA web app so the application points to the correct service endpoints (i.e. not dev). To do this, my initial approach was to add a PS task to the Release that unzips my deployment zip, updates the configuration files, re-zips them, and then allows the Release flow to deploy the updated zip. This works locally, but I'm running into filename-length issues on the server when unzipping, which I can't change.
Now I'm trying to just include my update PS scripts in my deployment package and then run the scripts AFTER the deployment has occurred. So I'm looking at this PowerShell on Target Machines task to run a PS script on the QA slot server to update configurations. However, it's asking for Machines, which would be the server name of the slot server. I don't have that, and I don't know where to get it. I also don't have the path to the PS scripts once I have the server name. I dumped out the server variables and none of them help me, unless there is a cmdlet to look up environments that I'm not aware of.
System.DefaultWorkingDirectory: 'C:\a\2ed23b64d'
System.TeamFoundationServerUri: 'https://REDACTED.vsrm.visualstudio.com/DefaultCollection/'
System.TeamFoundationCollectionUri: 'https://REDACTEDvisualstudio.com/DefaultCollection/'
System.TeamProject: 'REDACTED'
System.TeamProjectId: 'REDACTED'
Release.DefinitionName: 'REDACTED'
Release.EnvironmentUri: 'vstfs:///ReleaseManagement/Environment/46'
Release.EnvironmentName: 'QA'
Release.ReleaseDescription: 'Triggered by REDACTED Build Definition 20160425.4.'
Release.ReleaseId: '31'
Release.ReleaseName: 'Release-31'
Release.ReleaseUri: 'vstfs:///ReleaseManagement/Release/31'
Release.RequestedFor: 'Matthew Mulhearn'
Release.RequestedForId: ''
Agent.HomeDirectory: 'C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\1.98.1'
Agent.JobName: 'Release'
Agent.MachineName: 'TASKAGENT5-0020'
Agent.Name: 'Hosted Agent'
Agent.RootDirectory: 'C:\a'
Agent.WorkingDirectory: 'C:\a\SourceRootMapping\REDACTED'
Agent.ReleaseDirectory: 'C:\a\2ed23b64d'
Anyone have any idea, or a better approach, to accomplish what I'm attempting?