Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for creating a service connection to GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it uses GCP Container Registry.
I don't imagine it should be much different, but I keep getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
BUT the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this error.
When I created the service connection I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.

I was having the same issue. As @Mexicoder points out, the service account needs the ArtifactRegistryWriter role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter of the Docker task (Docker@2) needs to be in the form: PROJECT-ID/REPO/IMAGE
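For illustration, here is roughly how those pieces map onto plain docker commands; the region, project, repo, image, and key file names below are all placeholders:

# Authenticate against the regional Artifact Registry host
cat key.json | docker login -u _json_key --password-stdin https://us-west2-docker.pkg.dev

# The full image path combines the service connection host with the
# Docker@2 repository parameter: HOST/PROJECT-ID/REPO/IMAGE
docker tag my-image us-west2-docker.pkg.dev/my-project/my-repo/my-image:latest
docker push us-west2-docker.pkg.dev/my-project/my-repo/my-image:latest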

I was able to get it working with the documentation for Container Registry; my issue was with the repository name.
ALSO, the main difference when using Artifact Registry is the permission you need to give the IAM service account: use ArtifactRegistryWriter. StorageAdmin will be useless.
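If it helps, this is a minimal sketch of granting that role with gcloud; the project ID and service account name are placeholders:

# Grant the Artifact Registry Writer role to the publishing service account
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:azure-pipelines-publisher@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"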

Related

Why is my GCP image failing to deploy to local kubernetes?

I am getting "can't be pulled" when I use the Cloud Code plugin in VS Code to build and deploy an image to a local Kubernetes cluster. There are no errors being logged on GCP, but locally I'm getting the following:
- deployment/<redacted> failed. Error: container <redacted> is waiting to start: gcr.io/<redacted>/<redacted>:latest@sha256:<redacted> can't be pulled.
If your GCR registry is a private registry then you need to configure your local Kubernetes cluster with an imagePullSecret to use to authenticate to GCR. The general process is to create a service account in your GCP project, and then configure the corresponding service account key file as the pull secret.
There are a variety of tutorials that walk through this process.
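A minimal sketch of that process, assuming a key file named key.json and a secret named gcr-pull-secret (both hypothetical names):

# Create a docker-registry secret from the service account key file
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=unused@example.com

# Attach it to the default service account so pods can pull the image
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "gcr-pull-secret"}]}'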
Can you try gcloud auth list and check if you are using the right account? To switch accounts, use gcloud auth login <account>
Also make sure you have the right permissions to pull the image (for GCR this means read access to the storage bucket that backs the registry).
Once these two things are in place, you should be able to pull the image from GCR.
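Putting those checks together; the account, service account, and project names below are placeholders:

# See which account gcloud is currently using
gcloud auth list

# Switch to the right account if needed
gcloud auth login me@example.com

# Grant pull access on the GCS bucket that backs gcr.io for the project
gsutil iam ch \
  serviceAccount:puller@MY_PROJECT.iam.gserviceaccount.com:roles/storage.objectViewer \
  gs://artifacts.MY_PROJECT.appspot.com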

How to Connect to Cloud SQL Through Kubernetes

This is driving me crazy; I've been trying to get this to work for three days now. I'm trying to connect a Kubernetes deployment to my Cloud SQL database in GCP.
Here's what I've done so far:
Set up the cloud SQL proxy to work as a sidecar in my deployment
Created a GKE service account and attached it to my deployment
Bound the GKE service account to my GCP service account
Edited the service account so that (as far as I can tell) it has Owner permissions
Yet when I run the deployment in GKE, I still get:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
How can I fix this? I can't find any documentation on how to set up the service account to have the correct permissions with Cloud SQL or how to debug this issue. Every single tutorial I can find ends with "bind your service account" and then stops. Nothing that describes what permissions are needed, and nothing about how to actually connect to the DB from my code (how would my code talk to the proxy?).
Please help
FINALLY got it to work!
Two major pieces that the main article on this (cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) glosses over:
Properly setting up workload identity, for which I found these links to be very helpful:
a) https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
b) https://www.youtube.com/watch?v=l-nws1e4B8M
To connect to the DB, your code has to use 127.0.0.1 as the DB host, since the Cloud SQL proxy sidecar listens on localhost inside the pod.
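In case it helps others, here is a sketch of the workload identity wiring with gcloud and kubectl; all the names (project, namespace, service accounts) are placeholders:

# Google service account that the pods will act as
gcloud iam service-accounts create cloudsql-client

# Allow it to connect to Cloud SQL instances
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:cloudsql-client@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

# Let the Kubernetes service account impersonate the Google one
gcloud iam service-accounts add-iam-policy-binding \
  cloudsql-client@MY_PROJECT.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:MY_PROJECT.svc.id.goog[my-namespace/my-ksa]"

# Annotate the Kubernetes service account with the mapping
kubectl annotate serviceaccount my-ksa --namespace my-namespace \
  iam.gke.io/gcp-service-account=cloudsql-client@MY_PROJECT.iam.gserviceaccount.com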

Unable to get the service connection for Azure Container Registry in Azure DevOps (Release Pipeline)

I'm trying to deploy a Docker container on Azure App Service from Azure DevOps Services. I've pushed the Docker image to Azure Container Registry, but when I try to create the release definition, I am not able to find the service connection for Azure Container Registry. I have created the service connection for ACR, but it's not showing up in the list in the Azure DevOps portal.
When I selected 'Azure Container Repository' as the source type, the service connection is not visible in the drop-down box. When I use Docker Hub as another option, its service connection does show up in the list.
The steps I followed to create the service connection for ACR:
Selected Docker Registry from the list.
Selected Azure Container Registry as Registry Type. Provided the subscription ID and the registry from ACR.
Provided the service connection name and saved.
UPDATE
I have created a service connection for Azure Resource Manager using managed identity authentication, providing both the subscription ID and tenant ID. When I try to use this connection in the artifact settings, I get the error below.
Variable with name endpoint.serviceprincipalid could not be found for the given service connection.
It's also failing to pull the Docker image from ACR; the logs from App Service show pull access denied for the repository.
Service connection problem solved, but now facing a Docker permission issue from App Service:
2020-02-10 12:31:11.781 INFO - Pulling image from Docker hub:
kbdockerregis/kbdockerimage:15
2020-02-10 12:31:14.406 ERROR - DockerApiException: Docker API responded with
status code=NotFound, response={"message":"pull access denied for
kbdockerregis/kbdockerimage, repository does not exist or may require 'docker
login': denied: requested access to the resource is denied"}
2020-02-10 12:31:14.408 ERROR - Image pull failed: Verify docker image
configuration and credentials (if using private repository)
2020-02-10 12:31:14.412 INFO - Stoping site kbapp1 because it failed during
startup.
When I selected 'Azure Container Repository' as the source type, the
service connection is not visible in the drop down box.
For this first issue: this is because the API our system uses while you are choosing ACR as the release source is the one shown below:
https://dev.azure.com/{org}/{project}/_apis/serviceendpoint/endpoints?type=azurerm
You can see that this API is called with the parameter type=azurerm, so it only fetches service connections whose type is Azure Resource Manager. A Docker Registry connection does not belong to this type.
So you should create and use a service connection of the Azure Resource Manager type.
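You can see the filtering for yourself by calling that endpoint with a personal access token and varying the type parameter; the organization, project, and PAT below are placeholders:

# Connections the ACR release source can see (Azure Resource Manager type)
curl -u :MY_PAT \
  "https://dev.azure.com/my-org/my-project/_apis/serviceendpoint/endpoints?type=azurerm&api-version=6.0-preview.4"

# Docker Registry connections are a different type and won't appear above
curl -u :MY_PAT \
  "https://dev.azure.com/my-org/my-project/_apis/serviceendpoint/endpoints?type=dockerregistry&api-version=6.0-preview.4"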
Variable with name endpoint.serviceprincipalid could not be found for
the given service connection.
For this second issue, I haven't gotten much info from you (such as a stack trace). Based on what I know, I'd suggest you change the type from Managed Identity Authentication to Service Principal Authentication, then follow this doc to configure it.
This is more secure and can be authorized up front.
The Service Principal client ID is the application ID shown after you create the app under Azure App registrations, and the service principal key is a client secret you generate for that app.
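If you prefer the CLI, a service principal with a client ID and key can be created in one step; the name and subscription ID here are placeholders:

# appId in the output is the client ID; password is the service principal key
az ad sp create-for-rbac \
  --name "my-devops-connection" \
  --role Contributor \
  --scopes /subscriptions/MY_SUBSCRIPTION_ID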
Stack Overflow is an open forum and not a secure place to share some key info (especially a Fiddler trace) which I need to investigate from the backend. You'd be better off posting on the Microsoft Developer Community, where you can choose Microsoft Only visibility. If possible, I can go to that community and have its engineers show the trace to me, so that I can continue digging into it.

Azure Resource Manager Service Connection not connecting

We currently have one DevOps repository with a functional CI/CD pipeline. We have another website hosted on a different instance (and different region) in Azure. We are trying to use our existing repo to deploy to the other Azure instance, but it is giving us the following message:
Failed to query service connection API: 'https://management.azure.com/subscriptions/c50b0601-a951-446c-b637-afa8d6bb1a1d?api-version=2016-06-01'. Status Code: 'Forbidden', Response from server: '{"error":{"code":"AuthorizationFailed","message":"The client '2317de35-b2c2-4e32-a922-e0d076a429f5' with object id '2317de35-b2c2-4e32-a922-e0d076a429f5' does not have authorization to perform action 'Microsoft.Resources/subscriptions/read' over scope '/subscriptions/c50b0601-a951-446c-b637-afa8d6bb1a1d'."}}'
I have tried all of the recommended troubleshooting, making sure that the user is in a Global Administrator role and whatnot, but still no luck. The secondary Azure subscription that we are hoping to push our builds to is a trial account; I'm not sure if its being a trial account matters.
I came across the same error. It turns out that, as the error message states, the service principal didn't have Read permission over the subscription. So the solution was to go to the Azure Portal, select the subscription, select IAM, and assign the Reader role to my service principal. Full explanation here:
https://clydedz.medium.com/connecting-azure-devops-with-azure-46a908e3048f
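The same assignment can be done with the Azure CLI; the IDs below are the ones from the error message above:

# Give the service principal Reader rights on the subscription
az role assignment create \
  --assignee 2317de35-b2c2-4e32-a922-e0d076a429f5 \
  --role Reader \
  --scope /subscriptions/c50b0601-a951-446c-b637-afa8d6bb1a1d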
I have the same problem. There is one repository and two instances of the application on the Azure portal. The first instance uses the Pay-As-You-Go subscription, and there were no problems creating the service connection and CI/CD settings for it. The second instance uses a free subscription, and when trying to create a new service connection (Azure Resource Manager) I get the same error.
I tried it with both Owner and Contributor permissions.
UPD: What helped me was re-creating the application in the Azure portal:
https://learn.microsoft.com/en-ca/azure/active-directory/develop/howto-create-service-principal-portal
Another option would be to save the connection without verification, if the service principal will not require permissions at the subscription level, for example when it only needs access to a Key Vault.
Check if the service connection for the second instance is correctly added in the project settings.

API: sqs:CreateQueue always ACCESS DENIED

I'm trying to create an SQS queue with CloudFormation, but I keep getting this error in the console.
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
Obviously I'm missing some sort of permission, but the guide I followed didn't really specify how to resolve this.
Here's the code I made:
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-test
      ReceiveMessageWaitTimeSeconds: 20
      RedrivePolicy:
        deadLetterTargetArn:
          Fn::GetAtt:
            - "MyDLQ"
            - "Arn"
        maxReceiveCount: 4
      Tags:
        - Key: "ProjectName"
          Value: "project-x"
  MyDLQ:
    Type: AWS::SQS::Queue
    Properties:
      FifoQueue: false
      QueueName: sqs-dlq-test
I'm trying to understand this doc, but I'm not sure how I could attach a policy to allow the creation of queues. Could someone give me a full example?
tyron's comment on your question is spot on. Check permissions of the user executing the CloudFormation. If you're running commands directly, this is usually pretty easy to check. In some cases, you may be working with a more complicated environment with automation.
I find the best way to troubleshoot permissions in an automated world is via CloudTrail. After any API call has failed, whether from the CLI, CloudFormation, or another source, you can look up the call in CloudTrail.
In this case, searching for "Event Name" = "CreateQueue" in the time range of the failure will turn up a result with details like the following:
Source IP Address: this field may say something like cloudformation.amazonaws.com, or the IP of your machine/office. Helpful when you need to filter events based on the source.
User name: in my case, this was the EC2 instance ID of the agent running the CFN template.
Access Key ID: for EC2 instances, this is likely a set of temporary access credentials, but for a real user, it will show you what key was used.
Actual event data: especially helpful for non-permissions errors, the actual event may show you errors in the request itself.
In my case, the specific EC2 instance that ran automation was out of date and needed to be updated to use the correct IAM Role/Instance Profile. CloudTrail helped me track that down.
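For reference, the same lookup can be done from the CLI; the region is an assumption based on the error message above:

# List recent CreateQueue events, including failed calls
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateQueue \
  --max-results 10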
If you are using AWS CodePipeline (where you may be using AWS CodeBuild to run and deploy your CloudFormation stack), remember that your CodeBuild role (created under IAM Roles) must have the correct permissions.
You can identify which role is being used and attach the required policies:
Open CodeBuild Project
Go to Build Details > Environment > Service Role
Open Service Role (hyperlinked)
Add SQS to role policies
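As a sketch, attaching the missing permissions could look like this; the role and policy names are hypothetical, and you may want to scope Resource more tightly than "*":

# Attach an inline policy with the SQS actions CloudFormation needs
aws iam put-role-policy \
  --role-name codebuild-my-project-service-role \
  --policy-name AllowSqsProvisioning \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["sqs:CreateQueue", "sqs:TagQueue", "sqs:SetQueueAttributes", "sqs:GetQueueAttributes", "sqs:DeleteQueue"],
      "Resource": "*"
    }]
  }'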