Cannot deploy Kubeflow on GCP: tells me to enable APIs that are already enabled - kubernetes

I am trying to install Kubeflow on Google Cloud Platform (GCP) and Kubernetes Engine (GKE), following the GCP deployment guide.
I created a GCP project of which I am the owner, enabled billing, set up OAuth credentials, and enabled the following APIs:
Compute Engine API
Kubernetes Engine API
Identity and Access Management (IAM) API
Deployment Manager API
Cloud Resource Manager API
Cloud Filestore API
AI Platform Training & Prediction API
However, when I try to deploy Kubeflow using the UI, I get an error telling me to enable APIs that are already enabled.
So I double-checked, and those APIs are indeed already enabled.
The log messages at the bottom of the screen are:
2020-03-06 14:14:04.629: Getting enabled services for project <projectname>..
2020-03-06 14:14:16.909: Could not configure communication with GCP, exiting
The "Could not configure communication with GCP, exiting" message is triggered when _enableGcpServices() fails.
The line "Getting enabled services for project ..." is printed, but the line "Proceeding with project number: ..." is not, so the error must be triggered somewhere in the block of code between those lines.
The call to Gapi.cloudresourcemanager.getProjectNumber(project) has its own try/catch with a slightly different error message and title (it only mentions the Cloud Resource Manager API, not the IAM API), so I assume it is the call to Gapi.getSignedInEmail() that fails?
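For reference, the enabled services can also be double-checked from the command line; this is a generic gcloud invocation (not taken from the deployment guide), with the project name as a placeholder:
gcloud services list --enabled --project=<projectname>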

I'd suggest having a look at the Service Management API, the IAM Service Account Credentials API, and possibly the Cloud Identity-Aware Proxy API. I've only used the CLI install tool previously and haven't run into these problems, but you might require these services for the IAP deployment?
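If one of those is indeed missing, they can be enabled in one go; the service IDs below are the usual identifiers for those three APIs, but double-check them against your project:
gcloud services enable servicemanagement.googleapis.com \
    iamcredentials.googleapis.com iap.googleapis.com --project=<projectname>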

I faced the same issue and was able to solve it by correcting the project ID.
Make sure the project ID on the UI form is specified exactly as it appears in the GCP project, and that it does not have any leading or trailing spaces if you copy-pasted it from the GCP project details like I did.

I had the same issue. I was on a trial account, and it seems only a limited number of projects can use the billing account at the same time. So I shut down the unused ones: I went to Billing --> My projects, disabled the unused projects via the three-dot menu, and then enabled the billing account for the current project. It worked.
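If you prefer the command line over the Billing UI, the same re-linking can be sketched with gcloud (the billing account ID is a placeholder, and the billing commands may still sit under the beta component):
gcloud beta billing accounts list
gcloud beta billing projects link <projectname> --billing-account=XXXXXX-XXXXXX-XXXXXX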

Permission error: service account doesn't have access to cloud-ml platform

I am running a Kubeflow pipeline (Docker approach) and the cluster uses an endpoint to reach the dashboard. The cluster was created by following the instructions in this link: Deploy Kubeflow. Everything was created successfully, the cluster generated the endpoint, and it works perfectly.
The endpoint link is something like this: https://appname.endpoints.projectname.cloud.goog.
Every workload of the pipeline works fine except the last one. In the last workload, I am trying to submit a job to the Cloud ML engine, but the logs show that the application has no access to the project. Here is the relevant part of the log:
ERROR:
(gcloud.ml-engine.versions.create) PERMISSION_DENIED: Request had
insufficient authentication scopes.
ERROR:
(gcloud.ml-engine.jobs.submit.prediction) User
[clustername@project_name.iam.gserviceaccount.com]
does not have permission to access project [project_name]
(or it may not exist): Request had insufficient authentication scopes.
From the logs, it's clear that this service account doesn't have access to the project itself. However, I tried to give this service account access to the Cloud ML service, and it still throws the same error.
Are there any other ways to give Cloud ML service credentials to this application?
Check two things:
1) GCP IAM: whether clustername-user@projectname.iam.gserviceaccount.com has the ML Engine Admin role (see the example command after this list).
2) Your pipeline DSL: whether the cloud-ml engine step calls apply(gcp.use_gcp_secret('user-gcp-sa')), e.g. https://github.com/kubeflow/pipelines/blob/ea07b33b8e7173a05138d9dbbd7e1ce20c959db3/samples/tfx/taxi-cab-classification-pipeline.py#L67
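For 1), a minimal sketch of granting that role with gcloud, assuming the service account email from the error above and the roles/ml.admin role ID for ML Engine Admin:
# Grant ML Engine Admin to the cluster's service account (names are placeholders)
gcloud projects add-iam-policy-binding project_name \
    --member="serviceAccount:clustername@project_name.iam.gserviceaccount.com" \
    --role="roles/ml.admin"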

IBM Cloud API Connect Secure Gateway

Recently, I started seeing an issue when trying to set up a Secure Gateway within API Connect on IBM Cloud. I previously had it working, but it looks like they changed the wizard interface and it has been broken since then.
Here is what I did to recreate the issue:
Set up a new APIC instance on IBM Cloud
API Connect Manager UI > Admin > Secure Gateways > Add (name & save)
Once created, in the Secure Gateway Clients section, click on +Set Up
I see no ID or Token generated, no matter what type of client I choose (DataPower, Docker or Installer)
Anyone facing the same issue?
Empty ID and Token when trying to setup Secure Gateway Client
It turned out that creating a Secure Gateway from within APIC is a deprecated feature anyway.
You will need to create a standalone Secure Gateway resource on Bluemix and call it from your API assembly.
Here are the instructions:
https://www.ibm.com/support/knowledgecenter/en/SSFS6T/com.ibm.apic.apionprem.doc/task_api_secure_gateway.html

Error while attempting to run IBM TKE command with Hyper Protect Crypto Services

I tried to set up Hyper Protect Crypto Services in IBM Cloud. After I provisioned an instance and set up the IBM Cloud CLI, I attempted to run some of the TKE commands from the getting-started page. But when I run this command, it fails with:
ibmcloud tke domains
FAILED
API endpoint not recognized when determining target URLs.
It looks like you might not be connected to the correct API endpoint. You can run 'ibmcloud api' to see what your current API endpoint is set to. Hyper Protect Crypto is currently only available in the us-south region, so the endpoint you need would be https://api.ng.bluemix.net.
To set this region, you would issue 'ibmcloud api https://api.ng.bluemix.net'. The ng part determines the region you are in, so later on, if you are in Dallas or Australia, it would be a different endpoint name.
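In other words, something along these lines (assuming the classic Bluemix endpoint naming that was current at the time):
ibmcloud api https://api.ng.bluemix.net
ibmcloud login
ibmcloud tke domains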

How do we register a PCF Service Broker as reachable from two spaces in the same PCF Org (with org admin permissions)?

How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
We tried to register a PCF Service broker (cf create-service-broker ...) in one space, then use it as a 'service instance' (cf create-service ...) in another space.
To illustrate the problem, consider the following work flow, from a HashiCorp Vault guide:
$ cf create-space examplespace
$ cf target -s examplespace
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}" --space-scoped
$ cf marketplace
service            plans     description
hashicorp-vault    shared    HashiCorp Vault Service Broker
# ...
$ cf create-service hashicorp-vault shared my-vault
The above works fine. The problem comes up when we have an app in a different space that we want to consume the HashiCorp Vault API:
$ cf target -s myappspace
$ cf bind-service my-app my-vault
This last part fails.
Also, now that I'm in the space myappspace, cf marketplace does not show the new service broker.
Now, we have someone on our team with org-admin permissions.
I figured that we could just register the new service broker at the org level, using enable-service-access subcommand:
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
$ cf enable-service-access my-vault -o WebOrg
This failed as well: even though he had Admin permissions for the entire org, he got a permission-denied error.
If we then go on to register the service broker in the second space, myappspace, we get an error there as well.
All three of these methods failed, but there has to be some way to make a service from one space available to the others within an Org, if I have administrative permissions for that PCF Org.
How?
A similar (although more specific) type of this issue is documented in the following two github issues for PCF's cloud_controller_ng repository:
https://github.com/cloudfoundry/cloud_controller_ng/issues/935
https://github.com/cloudfoundry/cloud_controller_ng/issues/837
I've done the following research:
https://docs.cloudfoundry.org/services/managing-service-brokers.html#register-broker
https://docs.cloudfoundry.org/services/access-control.html
https://docs.cloudfoundry.org/services/access-control.html#enable-access-to-service-plans
https://starkandwayne.com/blog/register-your-own-service-broker-with-any-cloud-foundry/
(We ran variations of every command on this page.)
The most similar of the existing questions on Stack Overflow were these:
WebSphere Message Broker - how to send a PCF message
Need help on Registering App on PCF with Spring Cloud Data Flow which is also on PCF
They don't seem to have much to do with name spacing issues in the PCF marketplace, or with PCF permissions management.
Note: At first I wanted to post this to serverfault.com, because this has more to do with the infrastructure for an application, rather than just programming. But, while serverfault.com has no tag for Pivotal Cloud Foundry, Stack Overflow has a pivotal-cloud-foundry tag with 588 uses, already.
How do I register a Pivotal Cloud Foundry Service Broker to make it accessible from multiple spaces within the same Organization, if I have Org-level permissions?
I don't think you can do this. You'd need to be a platform admin/operator. Then you'd need to register the service broker with the platform and mark that broker as accessible to select orgs and spaces. You could then create service instances and, if the broker permits, share them across spaces.
If you only have org/space permissions, you can only register the service broker with a specific space. It's then only visible in that space.
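If someone with platform admin/operator credentials can do it for you, the operator-level flow would look roughly like this, reusing the placeholders from the question (note that enable-service-access takes the marketplace offering name, hashicorp-vault, not the instance name my-vault):
$ cf create-service-broker vault-broker "${AUTH_USERNAME}" "${AUTH_PASSWORD}" "https://${BROKER_URL}"
$ cf enable-service-access hashicorp-vault -o WebOrg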
Without platform admin/operator permissions, I think the best you could do would be this (a rough command sketch follows the list):
register the broker in a specific space
create a service instance in that space
bind that to your apps in this space
create a service key for your app in the second space
switch to the second space
create a user provided service in that space and enter the service key info
Repeat steps 4-6 for each app in the second space (this ensures you get unique credentials per app; you could use one service key for all apps if you don't care about that).
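A rough sketch of steps 4-6 using the names from the question; the fields you copy into the user-provided service depend entirely on the credentials JSON your broker returns, so the -p payload below is only illustrative:
$ cf target -s examplespace
$ cf create-service-key my-vault my-app-key
$ cf service-key my-vault my-app-key        # prints the credentials JSON to copy
$ cf target -s myappspace
$ cf create-user-provided-service my-vault-ups -p '{"address":"<vault-address>","token":"<vault-token>"}'
$ cf bind-service my-app my-vault-ups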
Happy to be corrected, but I think that is the state of things as I write this.
Assuming you are using PCF 2.1 or above.
Service brokers must explicitly enable service instance sharing by setting a flag in their service-level metadata object. This allows service instances, of any service plan, to be shared across orgs and spaces.
This is from Enabling Service Instance Sharing
Looks like you have already followed the rest of the steps from Sharing Service Instances.
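Once sharing is enabled on the broker side (and, if needed, an operator has turned on the service_instance_sharing feature flag), the sharing itself should be a single cf CLI command run from the space that owns the instance, roughly:
$ cf target -s examplespace
$ cf share-service my-vault -s myappspace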

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

I'm trying to get Google Cloud Storage working in my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
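As a rough sketch, assuming placeholder service account and project names, the key could be created and wired up like this (the key can equally be referenced from the ActiveStorage config instead of the environment variable):
gcloud iam service-accounts keys create keyfile.json \
    --iam-account=my-app@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json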
I was able to figure this out. I had been following the Rails guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution as of google-cloud-storage gem version 1.27 in August 2020 is documented here. My Google::Auth.get_application_default as in the documentation returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash in place of the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: it must be the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the object you need to sign. I think these are project-wide rights; unfortunately I couldn't get it to work with more fine-grained permissions (i.e. one app being admin for a certain Storage bucket).
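For completeness, a sketch of granting those two roles with the same placeholders as the command above (the member must be the full service account email):
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:<sa_name>" \
  --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:<sa_name>" \
  --role="roles/storage.admin"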