I have an ADO pipeline I'm trying to run as a containerized job. The YAML is set up with the following line:
container: myDockerHub/myRepo:myTag
That actually points to a tag in a private repo on DockerHub. The job errors with a message that access to the repo is denied and may require a login. Which is perfectly true: it's a private repo that does require a login. But how do I tell ADO to log in to the repo?
I have a service connection set up to DockerHub, and I use docker login successfully in other non-containerized jobs where a script is spinning up a docker image. But since this is using the container global option, I don't see any way to "preface" it with a login instruction. What do I need to get it to work here?
I don't see anything about authentication in the Microsoft documentation on container jobs.
You can use your DockerHub service connection with the endpoint property:
container:
  image: registry:myimage
  endpoint: private_dockerhub_connection
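For context, here is a minimal sketch of the same thing written as a container resource (the resource name my_container and the echo step are placeholders; the image and connection names are the ones from the question):

resources:
  containers:
  - container: my_container
    image: myDockerHub/myRepo:myTag
    endpoint: private_dockerhub_connection

jobs:
- job: build
  container: my_container
  steps:
  - script: echo "running inside the private image"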
I'm trying to push to Google Artifact Registry (GAR) from my local machine, but I always get this error:
failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
First, I've confirmed that my account has the Artifact Registry Writer role through IAM.
I have done the following locally:
# Login with my Google account
gcloud auth login --update-adc --force
# Configure docker to use the gcloud CLI auth helper
gcloud auth configure-docker us-west1-docker.pkg.dev
# docker login for good measure
docker login
# Tag my image (already built)
docker tag myimage us-west1-docker.pkg.dev/myproject/myrepo/myimage
# Push it
docker push us-west1-docker.pkg.dev/myproject/myrepo/myimage
On this final command I get the error above.
I have read all the Google documentation I could find, but it all suggests the above steps:
https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
https://cloud.google.com/artifact-registry/docs/docker/troubleshoot
Note: I can't pull either, using the command provided directly from the GCP web UI.
I'm on an M1 Mac.
So I was able to solve this problem by completely nuking Docker, specifically with these steps: https://stackoverflow.com/a/69437543/3846032. I couldn't uninstall it by normal means; it would just hang, implying that the problems I was getting were a result of my Docker installation being very broken. Indeed, I managed to follow the above steps on another machine and it worked, which led me to conclude that the steps above and my credentials were totally fine.
The 403 was a red herring; it must have come from my local Docker being broken in such a way that it didn't send properly authenticated requests.
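If anyone else hits this, one quick sanity check before nuking Docker (assuming the default config location) is whether the gcloud credential helper actually got registered:

# configure-docker should have added "us-west1-docker.pkg.dev": "gcloud"
# under the "credHelpers" key; if it's missing, pushes fall back to
# anonymous requests, which matches the "anonymous token" error above
cat ~/.docker/config.json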
I am getting "can't be pulled" when I use the Cloud Code plugin in VS Code to build and deploy an image to a local Kubernetes cluster. There are no errors being logged on GCP, but locally I'm getting the following:
- deployment/<redacted> failed. Error: container <redacted> is waiting to start: gcr.io/<redacted>/<redacted>:latest#sha256:<redacted> can't be pulled.
If your GCR registry is private, you need to configure your local Kubernetes cluster with an imagePullSecret to authenticate to GCR. The general process is to create a service account in your GCP project, and then configure the corresponding service account key file as the pull secret.
There are a variety of tutorials, and this one looks pretty good.
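As a rough sketch of that process (the key file key.json, the secret name gcr-pull-secret, and the email value are all placeholders):

# Create a pull secret from the service account key file
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=unused@example.com

# Attach it to the default service account so pods pick it up automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "gcr-pull-secret"}]}'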
Can you try gcloud auth list and check whether you are using the right account? To switch accounts, use gcloud auth login <account>.
Also make sure you have the right permissions: gcloud permission to pull GCP image.
Once these two things are in place, you should be able to pull the image from GCR.
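For example (PROJECT_ID and YOUR_ACCOUNT are placeholders):

# Show the accounts gcloud knows about; the active one is starred
gcloud auth list

# List the roles bound to your account on the project
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:user:YOUR_ACCOUNT"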
Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it uses GCP Container Registry.
I do not imagine it should be much different, but I keep on getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this.
When I created the service connection I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the ArtifactRegistryWriter permission. In addition, the following wasn't clear to me initially (a sketch follows below):
The service connection needs to be in the format https://REGION-docker.pkg.dev/PROJECT-ID (where REGION is something like 'us-west2').
The repository parameter of the Docker task (Docker@2) needs to be in the form PROJECT-ID/REPO/IMAGE.
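Putting those two together, a minimal sketch of the push step (the connection, project, repo, and image names are placeholders):

- task: Docker@2
  inputs:
    command: buildAndPush
    # service connection created against https://us-west2-docker.pkg.dev/PROJECT-ID
    containerRegistry: my-gar-connection
    repository: PROJECT-ID/my-repo/my-image
    tags: latest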
I was able to get it working with the documentation for Container Registry.
My issue was with the repository name.
ALSO, the main difference when using Artifact Registry is the permission you need to give the IAM service account: use ArtifactRegistryWriter. StorageAdmin will be useless.
I wanted to start using containers within my YAML build pipeline in Azure DevOps. The pipeline works just fine if I exclude the following code snippet:
container:
  image: my-image-name:1.0
  endpoint: my-endpoint-in-ado
When I tried the following approach, the pipeline validated but then, of course, failed authentication since the repository is private:
container: my-image-name:1.0
I'm not sure whether I am missing something trivial, but when I contacted a colleague from another team, he had it implemented in the same way and for him it works.
The error I'm getting via the Azure DevOps UI is the following (keep in mind that the error is gone if I remove the container section):
EDIT:
I've found out that the problem I am facing is that (for some reason) when the containers section is added to resources, the engine can no longer read the information from the repositories section. In the picture below, when I remove lines 7, 29 and 30, everything works fine and the container is pulled in the pipeline. The problem is that I need the variable from line 29 further on in my scripts, and as far as I know there is no other way to grab the details of repositories via other variables or any other means than I am already using.
Please follow the steps below to check the result.
By reference to this doc, Build and push to Azure Container Registry, we succeeded in pushing our own image to an Azure container registry.
By reference to these docs, Container reference and Container resource, we need to create a Docker Registry service connection whose type is Azure Container Registry. We enable the "Grant access permission to all pipelines" option so this service connection can be used in all pipelines without additional authorization. Please note that this service connection should be successfully validated before using it in the YAML pipeline.
After these actions, we can successfully use this container in the YAML pipeline, like below:
pool:
  vmImage: ubuntu-latest

resources:
  containers:
  - container: linux
    image: edwardregistery.azurecr.io/pipelines-javascript-docker:latest
    endpoint: my_acr_connection # reference to a service connection for the private registry

jobs:
- job: a
  container: linux # reference
  steps:
  - script: echo "hello world!"
Update: I can reproduce your issue (An error occurred while loading the YAML build pipeline. Value cannot be null. Parameter name: values) when setting the variable

variables:
- name: active_branch
  value: $[ replace(resources.repositories['test'].ref, 'refs/heads/', '') ]

if there are both a container resource and a repository resource in the YAML resources. However, it works if you remove the container resource and leave only the repository resource. We suggest that you submit it here to contact the product group to investigate this issue further.
I've configured a multi-container app using the Docker Compose task in an Azure Pipeline, but I could not get the URL for the multi-container application.
Do I need to configure the app service along with the docker compose task?
Please guide!!!
UPDATE
To get the application's URL from the Docker Compose task, you can make use of the Azure CLI commands provided in the following documentation link, suggested by Merlin Liang - MSFT:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-multi-container-yaml#view-deployment-state
I'm not sure which URL you are looking for.
1) If what you mean is the browsing URL of your app, you can find it in the Overview tab of the App Service:
Even though it is a multi-container app, this URL has a fixed format and is not affected by anything:
http://<your-app-name>.azurewebsites.net
2) If what you want is an integration URL used to notify/update the App Service once a new version of the image is available, just go to Container settings => Continuous Deployment => Webhook URL:
Do I need to configure the app service along with the docker compose task?
This depends on your actual demand. It is not necessary in most scenarios.
The Docker Compose task is used to orchestrate your containers. Based on your last SO ticket, you just run a service. In fact, the Azure Web App for Containers task integrates this part:
If you find that it cannot satisfy your usage, you could make use of the Docker Compose task.
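For reference, a minimal sketch of that task (the connection names, registry, and compose file are placeholders):

- task: DockerCompose@0
  inputs:
    containerregistrytype: Azure Container Registry
    azureSubscriptionEndpoint: my-azure-subscription-connection
    azureContainerRegistry: myregistry.azurecr.io
    dockerComposeFile: docker-compose.yml
    action: Run services
    detached: true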
Update (2020/3/2):
If someone just builds and pushes the containerized app into ACR, without any integration with Azure App Service, the browsing URL will look like localhost:<port>.
To get the exact host name and IP address, run the command below:
az container show --resource-group myResourceGroup --name myContainerGroup --output table
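If you only need the address, one way (assuming the container group exposes a public IP) is to query it directly:

# Print just the public IP and FQDN of the container group
az container show --resource-group myResourceGroup --name myContainerGroup \
  --query "{ip: ipAddress.ip, fqdn: ipAddress.fqdn}" --output table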