IBM Cloud Functions "Failed to pull container image"

I have a Docker container that I want to run in IBM Cloud Functions (OpenWhisk). Because I don't want the container to be publicly accessible, I want to store it in the IBM Cloud Container Registry. For OpenWhisk to be able to access it, I followed this tutorial for a similar problem: Access IAM-based services from IBM Cloud Functions
To summarize the steps:
1. create an IAM namespace for Functions
2. give the namespace access to the container registry
But sadly this doesn't solve the problem; I still get Failed to pull container image 'uk.icr.io/hvdveer/e2t-bridge:0.1.4', and I can't find anywhere to configure an API key or anything similar.
How can I get this to work?

OpenWhisk (the underlying technology of IBM Cloud Functions) does not yet support authenticated access to a container registry. As a consequence, AFAIK your use case is currently not supported.
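If making the image public is acceptable as a stopgap, the path that does work today is pulling from a public registry. A minimal sketch, assuming a public Docker Hub repository (the Docker Hub namespace below is hypothetical; the image name is reused from the question):

# Re-tag and push the image to a public registry, then point the action at it:
docker tag uk.icr.io/hvdveer/e2t-bridge:0.1.4 hvdveer/e2t-bridge:0.1.4
docker push hvdveer/e2t-bridge:0.1.4
ibmcloud fn action create e2t-bridge --docker hvdveer/e2t-bridge:0.1.4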

Related

How to set up a GKE cluster whose pods communicate with Cloud SQL, with the Cloud SQL password stored in Google Cloud Secret Manager

I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch the credentials from Secret Manager, and if the credentials in Secret Manager are updated, how will the pods get the new secret?
How do I set up the above? Can someone please help with this?
Thanks,
Anand
You can make your deployed application fetch the secret (password) programmatically from Google Cloud Secret Manager. You can find examples in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first, make sure that your GKE setup, and more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
You can find information regarding that particular solution in this doc.
There are also good examples on Medium, here and here.
To answer your question regarding updating the secrets:
Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or the pods to stick around for a very long time), you can adjust the code to re-fetch the secret on every execution.
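As a rough illustration of that pattern, here is a minimal sketch of fetching the secret at container start-up with the gcloud CLI (assuming the Cloud SDK is available in the image, and "db-password" is a hypothetical secret name; an application would normally use the client libraries linked above instead):

# Fetch the latest version of the secret each time the container starts:
export DB_PASSWORD="$(gcloud secrets versions access latest --secret=db-password)"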

Calling Google APIs from within k8s cluster

According to "Finding credentials automatically" from Google Cloud:
...ADC (Application Default Credentials) is able to implicitly find the credentials as long as the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, or as long as the application is running on Compute Engine, GKE, App Engine, or Cloud Functions.
Do I understand correctly that GOOGLE_APPLICATION_CREDENTIALS does not need to be present if I want to call Google Cloud APIs in the current Google Cloud project?
Let's say I'm in a container in a pod; what can I do from within the container to test that calling Google Cloud APIs just works™?
Check out https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity for how to set up permissions for your pods. You have to do some mapping so Google knows which pods get which permissions, but after that it's auto-magic, as you mentioned. Otherwise, calls will use the node-level Google permissions, which are generally minimal.
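For concreteness, a sketch of that mapping with Workload Identity (PROJECT_ID, GSA_NAME, K8S_NAMESPACE, and KSA_NAME are placeholders), plus a quick in-pod check that ADC can obtain a token:

# Allow the Kubernetes service account to impersonate the Google service account:
gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes service account with the Google service account:
kubectl annotate serviceaccount KSA_NAME --namespace K8S_NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

# From inside a pod, verify that ADC can mint an access token via the metadata server:
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"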

Service broker for Kubernetes catalog, simple implementation for Google Cloud Shell

I am trying to implement the Azure OSBA service broker on Google Cloud Shell to interact with Google Cloud Kubernetes and Azure services, but I am not able to run it; the commands always end in some error.
I have installed Helm and Service Catalog as well. Please suggest a simple service broker for Google Cloud Shell that I can implement easily for demo purposes. Can I use Google Cloud SQL for MySQL (GCP)? Please provide any information in the form of a website link or GitHub.
You can use Config Connector to manage your Google Cloud Platform (GCP) resources through Kubernetes configuration, since the Google Cloud Platform Service Broker is deprecated.
This documentation will help you get started with Config Connector by managing a Cloud Spanner instance. You can also refer to this repository, which contains sample applications and resources, like Pub/Sub, for use with Config Connector.
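As a minimal sketch of what that looks like in practice, along the lines of the Spanner getting-started example (assuming Config Connector is already installed in the cluster; the instance name is illustrative), you apply a manifest like this:

kubectl apply -f - <<EOF
apiVersion: spanner.cnrm.cloud.google.com/v1beta1
kind: SpannerInstance
metadata:
  name: spannerinstance-sample
spec:
  config: regional-us-central1
  displayName: Spanner Instance Sample
  numNodes: 1
EOF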

Unable to create Kubernetes cluster on Google Kubernetes Engine

We are unable to create Kubernetes clusters in our Google Cloud project. It was working a few weeks ago. We keep getting the following error:
Google Compute Engine: Required 'compute.zones.get' permission for 'projects/<project code>/zones/us-central1-a'
However, the role assigned to the user trying to create the cluster is Project/Owner, and the service account selected when creating the cluster has Project/Editor, which includes the compute.zones.get permission. Even if I give the service account Project/Owner it still gives the same error.
EDIT
When trying to create the cluster with gcloud we get a different (similar) error:
Google Compute Engine: Required 'compute.networks.get' permission for 'projects/<project code>/global/networks/default'
Not sure what went wrong, but the fix was to disable all the compute services and then re-initialise the Kubernetes service.
You lack the cloudservices service account in IAM. A current workaround for this issue is to re-enable the Google Cloud Compute Engine API.
I wasn't able to disable the Compute API due to an apparent dependency loop, but creating a new GCP project "fixed" this for me.
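For reference, a sketch of that disable/re-enable workaround with gcloud (PROJECT_ID is a placeholder; as noted above, the disable step can be blocked by dependent services):

gcloud services disable compute.googleapis.com --project PROJECT_ID
gcloud services enable compute.googleapis.com --project PROJECT_ID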

Create disk on Kubernetes without gcloud

I'm following the Spring Cloud Data Flow "Getting Started" guide here (section 13): http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I'm new to cloud computing, and I'm stuck at the point where I should create a disk for a MySQL DB via gcloud:
gcloud compute disks create mysql-disk --size 200 --type pd-standard
Well, that throws:
The required property [project] is not currently set.
There is one thing that I don't quite understand yet (not my main question): gcloud requires me to register a project in my Google account. I wonder how my Google account (and the cloud project there), the to-be-created disk image, and the server are related to one another.
My actual question, though, is: how can I create the disk for the master server locally, without using gcloud? Because I don't want a cloud server connected to my Google account.
Kubernetes does not manage any remote storage on its own. You can manage local storage by mounting an emptyDir volume.
gcloud creates cloud block storage in your Google Cloud account, and on Google Container Engine (GKE) Kubernetes is configured to access these resources by ID and can mount this type of volume into your Pod.
If you're not running Kubernetes on GKE, then you can't really mount a Google Cloud volume into your pod: the resources need to be managed by the same provider.
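To make the local-only option concrete, a minimal sketch of a Pod that backs MySQL with an emptyDir volume instead of a cloud disk (image tag, name, and password are illustrative; note that emptyDir data is lost when the Pod is deleted):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme  # illustrative only
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
EOF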