Kubernetes unable to pull images from gcr.io - kubernetes

I am trying to set up Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes addons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation, so how do I fix it?

This happened because of a recent outage of GCE storage; while it lasted, everyone ran into this error when pulling images from gcr.io (which uses GCE storage on the backend).
Are you still seeing this error?

As the message says, you need credentials. Are you using Google Container Engine? Then you need to run:
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then kubectl will have the credentials for your cluster.
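If that worked, a quick sanity check from your machine (assuming kubectl is installed locally) is:
kubectl config current-context   # should show the new GKE cluster context
kubectl get nodes                # should list the cluster's nodes without an auth error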

Related

Not able to pull image in Pod, getting ImagePullBackOff

I have my Kubernetes nodes on different VMs, one node per VM, for a total of 7 worker nodes.
When I try to create a Pod on one particular node I get an ImagePullBackOff error, even though a docker pull of the same image succeeds on that node.
The rest of the worker nodes are working fine.
My Docker registry is already listed under insecure-registries in daemon.json.
Please help.
ImagePullBackOff is almost always a typo in the image name. Make sure you specified the name correctly.
You need to describe the Pod using: kubectl describe pod <name>. It will show a more detailed message why pulling fails.
The kubernetes service account attached to the Pod is probably not able to pull the image. The service account must have the correct ImagePullSecrets.
When no service account is configured, it uses the default service account.
kubectl get sa -o yaml
This will give a list of ImagePullSecrets attached to this service account. See if you have created the correct secret and attached it to the service account.
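As a rough sketch, creating a pull secret and attaching it to the default service account looks something like the following (the registry host, credentials, and secret name are placeholders, not values from the question):
kubectl create secret docker-registry my-registry-key \
  --docker-server=<registry-host> --docker-username=<user> \
  --docker-password=<password> --docker-email=<email>
# attach the (hypothetical) secret to the default service account
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "my-registry-key"}]}'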
Resolved the issue.
The issue was the container runtime: the affected nodes were using containerd as the runtime, so I configured containerd on those nodes to access my insecure registry. Everything was OK after that.
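For anyone hitting the same thing, a rough sketch of that containerd change (the registry address is a placeholder and the exact layout depends on your containerd version) is to add a registry entry to /etc/containerd/config.toml and restart containerd:
# /etc/containerd/config.toml, CRI plugin registry section (containerd 1.x style)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."my-registry:5000"]
  endpoint = ["http://my-registry:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."my-registry:5000".tls]
  insecure_skip_verify = true
Then restart the runtime:
systemctl restart containerd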

Problems connecting to a GKE cluster using kubectl

I have a running cluster on Google Cloud Kubernetes engine and I want to access that using kubectl from my local system.
I tried installing kubectl with gcloud but it didn't work, so I installed kubectl using apt-get. When I check the version with kubectl version it says
Unable to connect to server EOF. I also don't have the file ~/.kube/config, and I am not sure why. Can someone please tell me what I am missing here? How can I connect to the already running cluster in GKE?
gcloud container clusters get-credentials ... will auth you against the cluster using your gcloud credentials.
If successful, the command adds the appropriate configuration to ~/.kube/config so that you can use kubectl.
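For completeness, a sketch of the full flow (cluster name, zone, and project ID are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
kubectl cluster-info   # confirms kubectl can now reach the cluster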

kubernetes can't pull certain images from ibm cloud registry

My pod fails with the following event:
Warning Failed 21m (x4 over 23m) kubelet, 10.76.199.35 Failed to pull image "registryname/image:version1.2": rpc error: code = Unknown desc = Error response from daemon: unauthorized: authentication required
but other images will work. The output of
ibmcloud cr images
doesn't show anything different about the images that don't work. What could be going wrong here?
Given this is in Kubernetes and you can see the image in ibmcloud cr images, it is most likely a misconfiguration of your imagePullSecrets.
If you do kubectl get pod <pod-name> -o yaml you will be able to see what imagePullSecrets are in scope for the pod and check whether they look correct (it could be worth comparing them to a pod that is working).
It's worth noting that if your cluster is an instance in the IBM Cloud Kubernetes Service, a default imagePullSecret for your account is added to the default namespace, so if you are running the pod in a different Kubernetes namespace you will need to do additional steps to make that work. This is a good place to start for information on this topic:
https://console.bluemix.net/docs/containers/cs_images.html#other
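As a rough sketch, assuming the default namespace already holds a working pull secret (its name is a placeholder here), copying it into another namespace looks like:
kubectl get secrets --namespace=default          # find the name of the pull secret
kubectl get secret <pull-secret-name> --namespace=default -o yaml \
  | sed 's/namespace: default/namespace: <target-namespace>/' \
  | kubectl create -f -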
It looks like you haven't logged in to the IBM Cloud Container Registry. If you haven't done this yet, you should log in with this command:
ibmcloud cr login
Other possible issues:
Docker is not installed.
The Docker client is not logged in to IBM Cloud Container Registry.
Your IBM Cloud access token might have expired.
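If the token has simply expired, a typical re-login sequence is:
ibmcloud login        # re-authenticate to IBM Cloud
ibmcloud cr login     # log the local Docker client in to the registry
ibmcloud cr images    # verify the registry is reachable again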
You can find more troubleshooting instructions here

Google Cloud Kubernetes Engine troubleshooting

Is anyone experiencing issues with Google Kubernetes Engine (specifically in the us-central1-b region, April 3rd, 11pm EST)?
I'm not able to see my workloads or any of my cluster configuration in the Google Kubernetes Engine section; it is intermittent (one minute it is there, then it disappears).
I also can't connect to the Kubernetes section of the Google Cloud Console to check on my pods!
There is no information about any issues as far as I can see. It could have been a network connection issue on your side; please check.
It appears to me that this is some issue with getting metadata from the cluster. You could check the following to help you troubleshoot or see if there are any underlying problems:
$ docker info
$ docker version
$ kubectl version
$ gcloud container clusters describe <your-cluster-name> --zone <your-cluster-zone>
$ kubectl get componentstatuses

K8s dashboard not accessible after first cluster in GKE (GCP, using console)

Newbie setup:
Created first project in GCP.
Created cluster with defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
In the GCP / GKE webpage of my cloud console, I clicked "Discovery and load balancing" and was able to see the "kubernetes-dashboard" process with a green tick, but I cannot access it through the IP listed. I tried 8001, 9090, /ui and nothing worked.
Not using any cloud shell or gcloud commands on my local laptop. Everything is done in the console.
Questions:
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear: are the dashboard components incorporated into the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run "kubectl proxy" and then open
"http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access the dashboard using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI at 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in: kubeconfig and token.
Select token and paste the secret copied earlier.
Hope this helps.
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside the cluster. If you SSH into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
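A quick way to check (this assumes the standard dashboard add-on in kube-system; cluster name and zone are placeholders):
kubectl get svc kubernetes-dashboard --namespace=kube-system      # shows the service ClusterIP
gcloud container clusters describe <cluster-name> --zone <zone> | grep servicesIpv4Cidr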
The dashboard is running as a pod inside your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an ssh tunnel. It's possible that it isn't working because the ssh tunnels are not running; you should verify that your project has newly created ssh rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
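For example, to look for those rules (rule names vary by cluster, so this just lists everything):
gcloud compute firewall-rules list   # look for a rule allowing tcp:22 from the cluster endpoint IP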
First:
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your Kubernetes dashboard endpoint by running:
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install kube-dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login