How to Connect to Cloud SQL Through Kubernetes

This is driving me crazy; I've been trying to get this to work for three days now. I'm trying to connect a Kubernetes deployment to my Cloud SQL database in GCP.
Here's what I've done so far:
Set up the Cloud SQL proxy to run as a sidecar in my deployment (rough sketch below)
Created a GKE (Kubernetes) service account and attached it to my deployment
Bound the GKE service account to my GCP service account
Granted the GCP service account what, as far as I can tell, is Owner permission
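For reference, my deployment is roughly shaped like this (all names are placeholders, and I'm using the v1 cloud_sql_proxy sidecar pattern; your image tag may differ):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          serviceAccountName: my-ksa          # the GKE service account I bound
          containers:
          - name: app
            image: gcr.io/my-project/my-app   # placeholder image
          - name: cloud-sql-proxy
            image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
            command:
            - /cloud_sql_proxy
            - -instances=my-project:us-central1:my-instance=tcp:3306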
Yet when I run the deployment in GKE I still get:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
How can I fix this? I can't find any documentation on how to set up the service account with the correct permissions for Cloud SQL, or on how to debug this issue. Every single tutorial I can find ends with "bind your service account" and then stops. Nothing describes what permissions are needed, and nothing explains how to actually connect to the DB from my code (how would my code talk to the proxy?).
Please help

FINALLY got it to work!
Two major pieces that the main article on this (cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) glosses over:
1) Properly setting up Workload Identity, for which I found these links very helpful:
a) https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
b) https://www.youtube.com/watch?v=l-nws1e4B8M
2) To connect to the DB, your code has to use 127.0.0.1 as the DB host, since that is where the proxy sidecar listens. Both pieces are sketched below.
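For anyone landing here later, the Workload Identity binding boils down to roughly this (all names are placeholders; double-check against the doc linked above):

    # Allow the Kubernetes service account to impersonate the GCP service account
    gcloud iam service-accounts add-iam-policy-binding \
        GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

    # Annotate the Kubernetes service account with the GCP service account
    kubectl annotate serviceaccount KSA_NAME \
        --namespace NAMESPACE \
        iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

    # And make sure the GCP service account can actually reach Cloud SQL
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
        --role roles/cloudsql.client

The app side is then just an ordinary DB connection to localhost. A minimal sketch, assuming MySQL, the PyMySQL driver, and credentials in env vars (your driver and port may differ):

    import os
    import pymysql  # assumed driver; for Postgres you'd use psycopg2 instead

    # The proxy sidecar listens on localhost inside the pod, so the app
    # connects to 127.0.0.1 rather than the instance's real IP.
    conn = pymysql.connect(
        host="127.0.0.1",
        port=3306,  # must match the tcp: port in the proxy's -instances flag
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())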

Related

Create Service Connection from Azure DevOps to GCP Artifact Registry

Are there any tutorials for connecting a service account to GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it uses GCP Container Registry.
I don't imagine it should be much different, but I keep getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
But the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and I'm still getting this error.
When I created the service connection I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the Artifact Registry Writer role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter to the Docker task (Docker@2) needs to be in the form: PROJECT-ID/REPO/IMAGE
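Put together, a sketch of what that looks like in the pipeline YAML (the connection name and image path are placeholders):

    - task: Docker@2
      inputs:
        # Docker Registry service connection pointing at
        # https://REGION-docker.pkg.dev/PROJECT-ID
        containerRegistry: 'my-artifact-registry-connection'
        repository: 'PROJECT-ID/REPO/IMAGE'   # project id first, then repo, then image
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: |
          $(Build.BuildId)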
I was able to get it working with the documentation for Container Registry; my issue was with the repository name.
Also, the main difference when using Artifact Registry is the role you need to give the IAM service account: use Artifact Registry Writer. Storage Admin will be useless.
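In gcloud terms, that grant is something like this (the project ID is a placeholder; the SA name follows the tutorial linked above):

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member "serviceAccount:azure-pipelines-publisher@PROJECT_ID.iam.gserviceaccount.com" \
        --role "roles/artifactregistry.writer"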

Which service account to use to connect from GKE to cloud SQL?

I'm following the instructions on how to connect from GKE to Cloud SQL: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
It talks about YOUR-GSA-NAME. Google Cloud creates the "Compute Engine default service account" by default. Should I pick this one, or create another service account for GKE only? What is the recommended way?
The Compute Engine default service account won't be able to connect to Cloud SQL out of the box; you'll have to grant it the Cloud SQL Client role for it to be able to connect.
I would create a new one, however, as you likely don't want every GCE instance to be able to connect to Cloud SQL; best practice is to limit access. So just create a new SA (service account) with the Cloud SQL Client role (and any other roles GKE needs) and use that one.
This is all found in IAM -> Service Accounts in the console.
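A minimal sketch of the same thing with gcloud (the SA name is just an example):

    gcloud iam service-accounts create gke-cloudsql-client \
        --display-name "GKE Cloud SQL client"

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member "serviceAccount:gke-cloudsql-client@PROJECT_ID.iam.gserviceaccount.com" \
        --role "roles/cloudsql.client"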

Cannot deploy Kubeflow on GCP: tells me to enable APIs that are already enabled

I am trying to install Kubeflow on Google Cloud Platform (GCP) and Kubernetes Engine (GKE), following the GCP deployment guide.
I created a GCP project of which I am the owner, enabled billing, set up OAuth credentials, and enabled the following APIs:
Compute Engine API
Kubernetes Engine API
Identity and Access Management (IAM) API
Deployment Manager API
Cloud Resource Manager API
Cloud Filestore API
AI Platform Training & Prediction API
However, when I want to deploy Kubeflow using the UI, I get an error telling me to enable APIs that are already enabled.
So I double-checked, and those APIs are indeed already enabled.
The log messages at the bottom of the screen are:
2020-03-06 14:14:04.629: Getting enabled services for project <projectname>..
2020-03-06 14:14:16.909: Could not configure communication with GCP, exiting
The "Could not configure communication with GCP, exiting" message is triggered when _enableGcpServices() fails.
The line Getting enabled services for project ... is printed but not the line Proceeding with project number: ..., so the error must be triggered somewhere in the block of code between those lines.
The call to Gapi.cloudresourcemanager.getProjectNumber(project) has its own try/catch with a slightly different error message and title (it only talks about the Cloud Resource Manager API, not the IAM API), so I assume it is the call to Gapi.getSignedInEmail() that fails?
I'd suggest having a look at the Service Management API, IAM Service Account Credentials API, and possibly the Cloud Identity-Aware Proxy API. I've only used the CLI install tool previously and haven't run into these problems, but you might need these services for the IAP deployment.
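If those turn out to be the culprit, enabling them from the CLI would look something like this (these are the standard service IDs for those three APIs; I haven't verified which ones the UI deployer actually checks):

    gcloud services enable \
        servicemanagement.googleapis.com \
        iamcredentials.googleapis.com \
        iap.googleapis.com \
        --project PROJECT_ID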
I faced the same issue and was able to solve it by correcting the project ID.
Make sure that the project ID in the UI form is specified exactly as it is on the GCP project, and that it does not have any leading or trailing spaces if you copy-pasted it from the GCP project details like I did.
I had the same issue. I was on a trial account, and it seems they only allow a limited number of projects to use a billing account at the same time. So I shut down the unused ones: I went to Billing --> My projects, disabled the unused projects via the three-dot menu, then enabled the billing account for the current project. It worked.

Unable to create Kubernetes Cluster on IBM Bluemix

I have been trying to create a Kubernetes cluster with my Bluemix account owner, but I always get the following error upon creation:
IBM Cloud Infrastructure exception: Your account is currently prohibited from order 'Computing Instances'.
Any idea what the issue is? There seems to be no direct way of getting support from public Bluemix to address this issue. We opened a ticket but it has not been addressed.
You should contact IBM Bluemix Support for this kind of question. Before you log in to the Bluemix console, there is a Support link.
From the look of the exception, it seems like you are trying to create a "second" Kubernetes cluster. If this is what you are trying to do, you will need a SoftLayer account; alternatively, your ID in your SoftLayer account may not be set up properly.
You need admin rights to create clusters in Bluemix. Just make sure that you get admin status and it should work for you. The normal permissions granted to you are those of a user. Hope this helps.

Node-Red on Bluemix - how to access the Node server files?

I created a Node-RED flow on Bluemix, did some development, and it was working OK for a few weeks. Suddenly the server won't start and logs "[Error: No cloudant service found]". The Cloudant DB credentials in VCAP look OK to me. How can I look at the other files the Node server uses to set up and run? I don't see any way to access them in Bluemix or via cf.
Node-RED looks for a Cloudant instance with the name <your-app-name>.cloudantNoSQLDB, as that is what the boilerplate/quick-start deploy process uses when deploying your instance.
One explanation for it not finding the bound Cloudant instance is that you have renamed your app.
The specific code deployed is available: https://github.com/node-red/node-red-bluemix-starter
The expected name of the cloudant instance is generated here: https://github.com/node-red/node-red-bluemix-starter/blob/25f216a61fba182c4f8d2594124e2e4bbbebc3a6/bluemix-settings.js#L80
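One way to check the naming from the cf CLI (assuming the standard cf commands; the names in the rename example are placeholders):

    cf apps        # note the exact app name
    cf services    # list service instances; look for <app-name>.cloudantNoSQLDB

    # If the names have drifted apart (e.g. after renaming the app),
    # renaming the service instance back into line can fix it:
    cf rename-service my-old-instance my-app-name.cloudantNoSQLDB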