Kubernetes is unable to launch a container using an image from a private gcr.io container registry.
The pod fails with the error "ImagePullBackOff".
Both Kubernetes and Container registry are in the same Google Cloud project.
The issue was with permissions.
It turns out that the service account used to run the Kubernetes nodes needs read permission on Google Cloud Storage. This matters because Container Registry stores its images in GCS buckets.
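Assuming the nodes run as the default Compute Engine service account, the missing permission can be granted like this (the project ID and service-account email below are placeholders, not values from the question):

```shell
# Grant the GKE node service account read access to the GCS buckets
# that back Container Registry. Substitute your own project ID and
# service-account email.
PROJECT_ID="my-project"
NODE_SA="123456789-compute@developer.gserviceaccount.com"

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${NODE_SA}" \
  --role roles/storage.objectViewer
```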
I have a Docker container that I want to run in IBM Cloud Functions (OpenWhisk). Because I don't want the container to be publicly accessible, I store it in the IBM Cloud Container Registry. For OpenWhisk to be able to access it, I followed this tutorial for a similar problem: Access IAM-based services from IBM Cloud Functions
To summarize the steps:
create an IAM namespace for Functions
give the namespace access to the container registry
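For reference, the two steps above roughly correspond to these IBM Cloud CLI commands. This is only a sketch: the namespace name is a placeholder, and the service names and role in the authorization policy are assumptions to verify with `ibmcloud iam authorization-policy-create --help`.

```shell
# Create an IAM-based namespace for Cloud Functions (name is a placeholder).
ibmcloud fn namespace create my-functions-namespace

# Authorize Functions to read from Container Registry.
# "functions", "container-registry" and "Reader" are assumed service
# names/role -- verify them against your account before running.
ibmcloud iam authorization-policy-create functions container-registry Reader
```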
But sadly this doesn't solve the problem: I still get "Failed to pull container image 'uk.icr.io/hvdveer/e2t-bridge:0.1.4'", and I can't find anywhere to configure an API key or anything similar.
How can I get this to work?
OpenWhisk (the underlying technology of IBM Cloud Functions) does not yet support authenticated access to a registry. As a consequence, AFAIK your use-case is currently not supported.
My application is deployed on a Kubernetes Cluster that runs on Google Cloud. I want to fetch logs written by my application using Stackdriver's REST APIs for logging.
From the above documentation page and this example, it seems that I can only list logs of a project, organization, billing account or folder.
I want to know if there are any REST APIs using which I can fetch logs of:
A pod in a Kubernetes Cluster running on Google Cloud
A VM instance running on Google Cloud
You need to query per MonitoredResource, which permits filtering by instance names and the like. For GCE the resource type is gce_instance, while for GKE it is container. Individual pods of a cluster can be filtered by their cluster_name and pod_id; the documentation for resource-list describes it:
container (GKE Container) A Google Container Engine (GKE) container instance.
project_id: The identifier of the GCP project associated with this resource, such as "my-project".
cluster_name: An immutable name for the cluster the container is running in.
namespace_id: Immutable ID of the cluster namespace the container is running in.
instance_id: Immutable ID of the GCE instance the container is running in.
pod_id: Immutable ID of the pod the container is running in.
container_name: Immutable name of the container.
zone: The GCE zone in which the instance is running.
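Putting those labels together, both cases can be fetched with gcloud (or the equivalent entries.list REST call). The project, cluster, pod, and instance IDs below are placeholders:

```shell
# Logs of one pod in a GKE cluster (resource type "container").
gcloud logging read \
  'resource.type="container"
   AND resource.labels.cluster_name="my-cluster"
   AND resource.labels.pod_id="my-pod-id"' \
  --project my-project --limit 20

# Logs of a single GCE VM instance.
gcloud logging read \
  'resource.type="gce_instance"
   AND resource.labels.instance_id="1234567890123456789"' \
  --project my-project --limit 20
```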
I am trying to do some experiments with Kubernetes in Google Cloud.
I have a Docker image in the Google Container Registry and need to deploy that image to a Kubernetes cluster.
Here are the steps I need to perform.
Create a Kubernetes cluster.
Copy the image from GCR and deploy to Kubernetes cluster.
Expose the cluster to internet via load balancer.
I know it is possible to do this via the Google Cloud SDK CLI. Is there a way to do these steps via Java or Node.js?
There is a RESTful kubernetes-engine API:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization
e.g. create a cluster:
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.zones.clusters/create
The Container Registry speaks the standard Docker Registry API.
Both Java and Node have kubernetes clients:
https://github.com/kubernetes-client/java
https://github.com/godaddy/kubernetes-client
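For orientation, the three steps map onto these CLI commands, each of which has a REST or client-library equivalent you can call from Java or Node.js (cluster, project, and image names are placeholders):

```shell
# 1. Create a cluster (REST: projects.zones.clusters.create).
gcloud container clusters create my-cluster --zone us-central1-a

# 2. Deploy the image from GCR (Kubernetes Deployments API).
kubectl create deployment my-app --image gcr.io/my-project/my-image:v1

# 3. Expose it to the internet via a load balancer (Kubernetes Services API).
kubectl expose deployment my-app --type LoadBalancer --port 80
```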
I'm following the Spring Cloud Data Flow "Getting Started" guide here (section 13): http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I'm new to cloud computing, and I'm stuck at the point where I should create a disk for a MySQL DB via gcloud:
gcloud compute disks create mysql-disk --size 200 --type pd-standard
Well, that throws:
The required property [project] is not currently set.
There is one thing that I don't quite understand yet (not my main question): gcloud requires me to register a project under my Google account. I wonder how my Google account (and the cloud project in it), the to-be-created disk image, and the server relate to one another.
My actual question, though, is: how can I create the disk for the master server locally, without using gcloud? I don't want a cloud server connected to my Google account.
Kubernetes does not manage any remote storage on its own. You can manage local storage by mounting an emptyDir volume.
Gcloud creates cloud block storage on your Google Cloud account, and on Google Container Engine (GKE) Kubernetes is configured to access these resources by ID and can mount this type of volume into your Pod.
If you're not running Kubernetes on GKE, then you can't really mount a Google Cloud volume into your pod: the resources need to be managed by the same provider.
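A minimal sketch of the emptyDir alternative mentioned above (pod name and image are placeholders; note that emptyDir contents live on the node and are lost when the pod is removed from it, so this is only suitable for experiments):

```shell
# Pod with node-local scratch storage instead of a cloud disk.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mysql-local
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
EOF
```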
I have an app that I deploy as part of a stream with Spring Cloud Dataflow on a Kubernetes cluster. The Docker image for the app contains a VOLUME instruction and I'd like to specify a directory on the host to mount the volume to. (This is network-attached storage that all hosts in the cluster can access.)
I didn't see anything in KubernetesDeployerProperties.
Is this possible?
Sorry, no built-in support for volumes. Feel free to raise an issue here: https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues