Detect Google Cloud Project Id from a container in a Google-hosted Kubernetes cluster

Detect Google Cloud Project Id from a container in a Google-hosted Kubernetes cluster.
When connecting to Bigtable, I need to provide the Google Cloud project ID. Is there a way to detect this automatically from within Kubernetes?

In Python, you can find the project id this way:
import google.auth
_, PROJECT_ID = google.auth.default()
The original question didn't mention what programming language was being used, and I had the same question for Python.

You can use the metadata service. Example:
curl -H "Metadata-Flavor: Google" -w '\n' http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id
This will work from any VM running on Google Compute Engine or Container Engine.
See https://cloud.google.com/compute/docs/storing-retrieving-metadata:
Google Compute Engine defines a set of default metadata entries that provide information about your instance or project. Default metadata is always defined and set by the server.
...
numeric-project-id The numeric project ID of the instance, which is not the same as the project name visible in the Google Cloud Platform Console. This value is different from the project-id metadata entry value.
project-id The project ID.

Google has some libraries for this too: ServiceOptions.getDefaultProjectId
https://googleapis.github.io/google-cloud-java/google-cloud-clients/index.html
https://github.com/googleapis/google-cloud-java/blob/master/google-cloud-clients/google-cloud-core/src/main/java/com/google/cloud/ServiceOptions.java
https://github.com/googleapis/google-cloud-java/tree/master/google-cloud-clients/google-cloud-core
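If you are on the JVM, a minimal sketch using that helper might look like the following (assuming the google-cloud-core artifact is on the classpath; the class name here is just for illustration):
import com.google.cloud.ServiceOptions;

public class DetectProject {
    public static void main(String[] args) {
        // Resolves the project ID from the environment: an explicit
        // GOOGLE_CLOUD_PROJECT setting, application default credentials,
        // or the GCE/GKE metadata server, in that general order.
        String projectId = ServiceOptions.getDefaultProjectId();
        System.out.println("Project ID: " + projectId);
    }
}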

Related

Cannot deploy Kubeflow on GCP: tells me to enable APIs that are already enabled

I am trying to install Kubeflow on Google Cloud Platform (GCP) and Kubernetes Engine (GKE), following the GCP deployment guide.
I created a GCP project of which I am the owner, I enabled billing, set up OAuth credentials and enabled the following APIs:
Compute Engine API
Kubernetes Engine API
Identity and Access Management (IAM) API
Deployment Manager API
Cloud Resource Manager API
Cloud Filestore API
AI Platform Training & Prediction API
However, when I try to deploy Kubeflow using the UI, I get an error telling me to enable APIs that are already enabled.
So I double-checked, and those APIs are indeed enabled.
The log messages at the bottom of the screen are:
2020-03-06 14:14:04.629: Getting enabled services for project <projectname>..
2020-03-06 14:14:16.909: Could not configure communication with GCP, exiting
The "Could not configure communication with GCP, exiting" message is triggered when _enableGcpServices() fails.
The line Getting enabled services for project ... is printed, but the line Proceeding with project number: ... is not, so the error must be triggered somewhere in the block of code between those lines.
The call to Gapi.cloudresourcemanager.getProjectNumber(project) has its own try/catch with a slightly different error message and title (it only mentions the Cloud Resource Manager API, not the IAM API), so I assume it is the call to Gapi.getSignedInEmail() that fails?
I'd suggest having a look at the Service Management API, the IAM service credentials API, and possibly the Cloud Identity-Aware Proxy API. I've only used the CLI install tool previously and didn't run into these problems, but you might need these services for the IAP deployment.
I faced the same issue and was able to solve it by correcting the project ID.
Make sure that the project ID on the UI form is specified exactly as it is on the GCP project, and that it does not have any leading or trailing spaces if you copy-pasted it from the GCP project details like I did.
I had the same issue. I was on a trial account, and it seems only a limited number of projects are allowed to use a billing account at the same time. So I shut down the unused ones: I went to Billing --> My projects, disabled the unused projects via the three-dot menu, and then enabled the billing account for the current project. It worked.

How to get programmatically the current GKE project id from one of its clusters?

I'd like to get the current GKE project id from within one of its clusters via the Java client or the GCloud API itself.
I'm running Java containers in a GKE cluster of a specific Google Cloud project.
I initialize the ClusterManagerClient with the appropriate ClusterManagerSettings
-> Is it possible to fetch this specific project id with this client?
(I'm expecting that there is a global context within each GKE cluster from which we could know the current project we're running in.)
Thank you
As John Hanley mentioned in his comment above, you can use the instance metadata on the node in your cluster to determine the project that the node is a part of. The easiest way to see it is to use curl from a shell (either on the node or in a container).
If you want the project name, it can be seen at:
curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google"
And if you want the project number, it can be seen at:
curl "http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id" -H "Metadata-Flavor: Google"
This isn't part of the container API surface, so the ClusterManagerClient isn't the right API client to use. You would need a client that fetches the instance metadata, which I would expect to be part of the compute client libraries. Alternatively, you can just make a local HTTP request with the right headers (as shown above), since you don't need any special client authentication / authorization to access the local metadata.
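For reference, a minimal Java sketch of that local HTTP request might look like the following (it only uses the URL and header shown above, assumes Java 9+ for InputStream.readAllBytes, and keeps error handling to a bare minimum):
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MetadataProjectId {
    public static void main(String[] args) throws IOException {
        // Only reachable from inside a GCE/GKE instance; no credentials needed.
        URL url = new URL("http://metadata.google.internal/computeMetadata/v1/project/project-id");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // The metadata server rejects requests without this header.
        conn.setRequestProperty("Metadata-Flavor", "Google");
        try (InputStream in = conn.getInputStream()) {
            String projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            System.out.println("Project ID: " + projectId);
        }
    }
}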

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
I was able to figure this out. I had been following the Rails guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
service: GCS
credentials: <%= Rails.root.join("path/to/keyfile.json") %>
project: ""
bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Calling Google::Auth.get_application_default as in the documentation returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you use a dash in place of the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct, i.e. the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I think these are project-wide rights; unfortunately I couldn't get it to work with more fine-grained permissions (i.e. one app being admin for a certain Storage bucket).

How can I look up a valid label name for use as a declared service in an IBM Bluemix manifest.yml

Is there a valid list of label names for the "declared services" (https://github.com/IBM/watson-calorie-counter/blob/master/manifest.yml#L4) that can be used in a bluemix deployment documented somewhere?
This blog post describes how to get the label name for a specific service with the UI: https://www.ibm.com/blogs/bluemix/2016/01/deploy-to-bluemix-button-example/ -- But I was hoping there was a single source of truth documented somewhere, or I could find the information programmatically with the cf or bx CLIs.
What you are looking for is the documentation on how the declared-services section works. It is an IBM extension to regular Cloud Foundry manifest files. The extension is described in the IBM Cloud documentation for Continuous Delivery. That section also has details on how to look up the service names and labels:
Declared services, a manifest extension which creates or looks for the required or optional services that are expected to be set up before the app is deployed, such as a data cache service. You can find a list of the eligible Bluemix services, labels, and plans by using the CF Command Line Interface command cf marketplace or by browsing the Bluemix catalog.
So you would look up how a service is named and what plans are offered.

Google Cloud Platform: Logging in to GCP from the command line

I was sure it would be simple, but I couldn't find any documentation or resolution.
I'm trying to write a script using gcloud to perform some operations in my GCP instances.
Is there any way to log in/authenticate using gcloud via the command line only?
Thanks
You have a couple of options here (depending on what exactly you're trying to do).
The first option is to log in using the --no-launch-browser option. This still requires interaction from a human user, but doesn't require a browser on the machine you're using:
> gcloud auth login --no-launch-browser
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
Enter verification code: *********************************************
Saved Application Default Credentials.
You are now logged in as [user@example.com].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
The non-interactive option involves service accounts. The linked documentation explains them better than I can, but the short version of what you need to do is as follows:
Create a service account in the Google Developers Console. Make sure it has the appropriate "scopes" (these are permissions that determine what this service account can do). Download the corresponding JSON key file.
Run gcloud auth activate-service-account --key-file <path to key file>.
Note that Google Compute Engine VMs come with a slightly different service account; the difference is described here.