I ran eval $(minikube docker-env) and then built a Docker image. When I run docker images on my host I can see the image. When I run minikube ssh and then docker images I can see it.
When I try to run it, the pod fails to launch. kubectl describe pod gives:
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal Pulling pulling image "personalisation-customer:latest"
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Warning Failed Failed to pull image "personalisation-customer:latest": rpc error: code = 2 desc = Error: image library/personalisation-customer:latest not found
14m 2s 66 kubelet, minikube Warning FailedSync Error syncing pod
14m 2s 59 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal BackOff Back-off pulling image "personalisation-customer:latest"
My imagePullPolicy is Always.
What could be causing this? Other pods are working locally.
You aren't actually pulling from a local registry; you are using your previously downloaded or locally built images. Since you specify imagePullPolicy: Always, Kubernetes will always try to pull the image from a registry.
Your image name personalisation-customer:latest doesn't include a registry, so Docker resolves it to index.docker.io/library/personalisation-customer:latest (as the error message shows), and that image doesn't exist in the public Docker registry.
So you have two options: set imagePullPolicy: IfNotPresent, or push the image to a registry.
The local Docker cache isn't a registry. Since you set imagePullPolicy to Always, Kubernetes tries to download the image from Docker Hub (the default registry). Set it to Never so Kubernetes uses the local image.
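A minimal Pod sketch, assuming the image was built inside minikube's Docker daemon (after eval $(minikube docker-env)); the pod and container names are assumed placeholders, the image name is taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: personalisation-customer
spec:
  containers:
  - name: personalisation-customer
    image: personalisation-customer:latest
    imagePullPolicy: Never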
I had the same issue. The problem was not specifying the image version (tag). I used
kubectl run testapp1 --image=<image> --image-pull-policy=IfNotPresent
instead of,
kubectl run testapp1 --image=<image>:<version> --image-pull-policy=IfNotPresent
For minikube, make the changes as follows:
Eval the docker env using:
eval $(minikube docker-env)
Build the docker image:
docker build -t my-image .
Set the image name to just "my-image" in the pod specification in your YAML file.
Set the imagePullPolicy to Never in your YAML file. Here is an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-image
spec:
  selector:
    matchLabels:
      app: my-image
  template:
    metadata:
      labels:
        app: my-image
    spec:
      containers:
      - name: my-image
        image: "my-image"
        imagePullPolicy: Never
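Then apply the manifest and check that the pod starts without trying to pull (assuming the file is saved as deployment.yaml):
kubectl apply -f deployment.yaml
kubectl get pods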
I'm running Kubeflow on a local machine that I deployed with multipass using these steps, but when I tried running my pipeline, it got stuck with the message ContainerCreating. When I ran kubectl describe pod train-pipeline-msmwc-1648946763 -n kubeflow I found this in the Events part of the describe output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 7m12s (x51 over 120m) kubelet, kubeflow-vm Unable to mount volumes for pod "train-pipeline-msmwc-1648946763_kubeflow(45889c06-87cf-4467-8cfa-3673c7633518)": timeout expired waiting for volumes to attach or mount for pod "kubeflow"/"train-pipeline-msmwc-1648946763". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact pipeline-runner-token-dkvps]
Warning FailedMount 2m22s (x67 over 122m) kubelet, kubeflow-vm MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
Looks to me like there is a problem with my deployment, but I'm new to Kubernetes and can't figure out what I'm supposed to do right now. Any idea on how to solve this? I don't know if it helps, but I'm pulling the containers from a private docker registry and I've set up the secret according to this.
You don't need to use Docker. In fact the problem is with the workflow-controller-configmap in the kubeflow namespace. You can edit it with
kubectl edit configmap workflow-controller-configmap -n kubeflow
and change containerRuntimeExecutor: docker to containerRuntimeExecutor: pns. You can also change some of the steps and install Kubeflow 1.3 in multipass with 1.21 rather than 1.15. Do not use the kubeflow add-on (at least it didn't work for me). You need kustomize 3.2 to create the manifests, as mentioned in https://github.com/kubeflow/manifests#installation.
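For reference, after the edit the relevant part of the ConfigMap should look roughly like this (a sketch; depending on the Argo version the key may sit directly under data or inside a multi-line config entry):
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  containerRuntimeExecutor: pns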
There was one step missing which is not mentioned in the tutorial: I had to install Docker. I installed Docker, rebooted the machine, and now everything works fine.
Trying to install a sample container app using a Pod in my local environment.
I'm using the Kubernetes cluster that comes with Docker Desktop.
I'm creating the Pod using the below command with the YML file
kubectl create -f test_image_pull.yml
apiVersion: v1
kind: Pod
metadata:
# value must be lower case
name: sample-python-web-app
spec:
containers:
- name: sample-hello-world
image: local/sample:latest
imagePullPolicy: Always
command: ["echo", "SUCCESS"]
Dockerfile used to build the image; the container runs without any issue if you run it with docker run
# Use official runtime python
FROM python:2.7-slim
# set work directory to app
WORKDIR /app
# Copy current directory
COPY . /app
# install needed packages
RUN pip install --trusted-host pypi.python.org -r requirement.txt
# Make port 80 available to outside container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python" , "app.py"]
from flask import Flask
from redis import Redis, RedisError
import os
import socket
#connect to redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)
app = Flask(__name__)
@app.route("/")
def hello():
try:
visits = redis.incr("counter")
except RedisError:
visits = "<i>cannot connect to Redis, counter disabled</i>"
html = "<h3>Hello {name}!</h3>" \
"<b>Hostname:</b> {hostname}<br/>" \
"<b>Visits:</b> {visits}"
return html.format (
name=os.getenv("NAME", "world"),
hostname=socket.gethostname(),
visits=visits
)
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80)
requirement.txt:
Flask
Redis
When I describe the pod it shows me the error below
kubectl describe pod sample-python-web-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m25s default-scheduler Successfully assigned default/sample-python-web-app to docker-desktop
Normal Pulling 97s (x4 over 3m22s) kubelet, docker-desktop Pulling image "local/sample:latest"
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Failed to pull image "local/sample:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for local/sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Error: ErrImagePull
Normal BackOff 78s (x6 over 3m16s) kubelet, docker-desktop Back-off pulling image "local/sample:latest"
Warning Failed 66s (x7 over 3m16s) kubelet, docker-desktop Error: ImagePullBackOff
Kubernetes pulls container images from a Docker Registry. Per the doc:
You create your Docker image and push it to a registry before
referring to it in a Kubernetes pod.
Moreover:
The image property of a container supports the same syntax as the
docker command does, including private registries and tags.
So, with the way the image is referenced in the pod's spec - "image: local/sample:latest" - Kubernetes looks on Docker Hub for an image in a repository named "local".
You can push the image to Docker Hub or to some other external Docker Registry, public or private; you can host a Docker Registry on the Kubernetes cluster; or you can run a Docker Registry locally, in a container.
To run a Docker registry locally:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Next, find the IP address of the host - below I'll use 10.0.2.1 as an example.
Then, assuming the image name is "local/sample:latest", tag the image:
docker tag local/sample:latest 10.0.2.1:5000/local/sample:latest
...and push the image to the local registry:
docker push 10.0.2.1:5000/local/sample:latest
Next, change how the image is referenced in the pod's configuration YAML - from
image: local/sample:latest
to
image: 10.0.2.1:5000/local/sample:latest
Restart the pod.
EDIT: Most likely the local Docker daemon will have to be configured to treat the local Docker registry as insecure. One way to configure that is described here - just replace "myregistrydomain.com" with the host's IP (e.g. 10.0.2.1). Docker Desktop also lets you edit the daemon's configuration file through the GUI.
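For example, with the assumed host IP 10.0.2.1 from above, the daemon's daemon.json would gain an insecure-registries entry like this (restart Docker afterwards):
{
  "insecure-registries": ["10.0.2.1:5000"]
}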
If you want to set up a local repository for the Kubernetes cluster, you might follow this guide.
I would recommend using Trow.io, an image management solution for Kubernetes, to quickly create a registry that runs within Kubernetes and provides a secure and fast way to get containers running on the cluster.
We're building an image management solution for Kubernetes (and possibly other orchestrators). At its heart is the Trow Registry, which runs inside the cluster, is simple to set-up and fully integrated with Kubernetes, including support for auditing and RBAC.
Why "Trow"
"Trow" is a word with multiple, divergent meanings. In Shetland folklore a trow is a small, mischievous creature, similar to the Scandanavian troll. In England, it is a old style of cargo boat that transported goods on rivers. Finally, it is an archaic word meaning "to think, believe, or trust". The reader is free to choose which interpretation they like most, but it should be pronounced to rhyme with "brow".
Whole installation process is described here.
I am trying to deploy a Pod in my v1.13.6-gke.6 k8s cluster.
The image that I'm using is pretty simple:
FROM scratch
LABEL maintainer "Bitnami <containers#bitnami.com>"
COPY rootfs /
USER 1001
CMD [ "/chart-repo" ]
As you can see, the user is set to 1001.
The cluster that I am deploying the Pod in has a PSP setup.
spec:
allowPrivilegeEscalation: false
allowedCapabilities:
- IPC_LOCK
fsGroup:
ranges:
- max: 65535
min: 1
rule: MustRunAs
runAsUser:
rule: MustRunAsNonRoot
So basically, as per the rule: MustRunAsNonRoot setting, the above image should run.
But when I run the image, I randomly run into:
Error: container has runAsNonRoot and image will run as root
So digging further, I got this pattern:
Every time I run the image with imagePullPolicy: IfNotPresent, I run into the issue. In other words, every time a cached image is picked up, it gives the container has runAsNonRoot error.
Normal Pulled 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Container image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1" already present on machine
Warning Failed 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Error: container has runAsNonRoot and image will run as root
BUT
Every time I run the image as imagePullPolicy: Always, the image SUCCESSFULLY runs:
Normal Pulled 6s kubelet, test-1905-default-pool-1b8e4761-sh5g Successfully pulled image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1"
Normal Created 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Created container
Normal Started 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Started container
So I'm not really sure what all this is about. Just because the imagePullPolicy is different, why does it wrongly apply the PSP rule?
Found out the issue. It's a known issue with k8s for 2 specific versions, v1.13.6 & v1.14.2.
https://github.com/kubernetes/kubernetes/issues/78308
Hard to tell based on the description. Do you have multiple nodes in your cluster?
First, if you are seeing the error with imagePullPolicy: IfNotPresent and not seeing it with imagePullPolicy: Always, it's most likely because the container image cached on the node differs from the one in the container registry.
Could it be that you have an older version of my-repo/bitnami/kubeapps-chart-repo locally with the same tag?
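One way to check (a sketch, assuming Docker is available where the image was built) is to compare the digest of the locally cached image with a fresh pull of the same tag:
# digest of the locally cached image
docker images --digests my-repo/bitnami/kubeapps-chart-repo
# pull the tag again and compare the digests
docker pull my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1
docker images --digests my-repo/bitnami/kubeapps-chart-repo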
pod.yml
apiVersion: v1
kind: Pod
metadata:
name: hello-pod
labels:
zone: prod
version: v1
spec:
containers:
- name: hello-ctr
image: hello-world:latest
ports:
- containerPort: 8080
kubectl create -f pod.yml
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-pod 0/1 CrashLoopBackOff 5 5m
Why CrashLoopBackOff?
In this case the behaviour is expected. The hello-world container is meant to print some messages and then exit, so this is why you are getting CrashLoopBackOff:
Kubernetes runs the pod, the container inside runs its commands and then exits.
Suddenly there is nothing running underneath, so the pod is run again -> the same thing happens and the number of restarts grows.
You can see that in kubectl describe pod, where the Terminated state is visible and the Reason for it is Completed. If you chose a container image which does not exit after completion, the pod would stay in the Running state.
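For example (nginx is used here purely as an assumed long-running image), a pod like this stays in the Running state:
apiVersion: v1
kind: Pod
metadata:
  name: long-running-pod
spec:
  containers:
  - name: web
    image: nginx:latest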
The hello-world container actually exits, so Kubernetes thinks it is crashing and keeps restarting it; it exits again and the pod goes into CrashLoopBackOff. When you docker run your hello-world container you get this:
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
$
So, it's a one-time thing and not a service. Kubernetes has Jobs or CronJobs for these types of workloads.
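If you do want to run hello-world on the cluster as a one-off, a minimal Job sketch could look like this (the name hello-job is assumed):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-world:latest
      restartPolicy: Never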
We have a Kubernetes cluster running on-premises and a private GCR repository. How can we access that private repository from our on-premises Kubernetes cluster? As far as I know we can do it using the gcloud SDK, but it won't be possible to install the gcloud SDK on every node of the Kubernetes cluster.
We deploy pods on an Azure AKS cluster and the images come from GCR.
These are the steps we follow:
Create a service account in gcloud with permissions to GCR.
Create keys for the service account:
gcloud iam service-accounts keys create gcr-docker-cred.json --iam-account=service-account-name@project-id.iam.gserviceaccount.com
Add a kubectl secret:
kubectl create secret docker-registry gcriosecret --docker-server=https://gcr.io --docker-username=_json_key --docker-email=user@example.com --docker-password="$(cat gcr-docker-cred.json)"
Use the secret in the YAML:
imagePullSecrets:
- name: gcriosecret
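If you would rather not add imagePullSecrets to every pod spec, the secret can also be attached to the namespace's default service account (a sketch, assuming the default namespace):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcriosecret"}]}'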
This blog might be a good help.
Kubernetes clusters running on GKE or GCE have native support for accessing the container registry and need no further configuration.
As you mentioned, you are running an on-premises cluster, so you are not running on GKE or GCE and only use the container registry from GCP. While I haven't had the chance to test this (I don't have a cluster outside Google Cloud), the process shouldn't be different from pulling an image from any other private registry.
In your case you can create a secret with the auth credentials for the gcr.io registry like this:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
your-registry-server in this case will probably be https://gcr.io/[your-project-id]
When you have created the secret named regcred, you can configure pods to use it for pulling the desired image from the registry by adding an imagePullSecrets section like this:
apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: [The image you want to pull]
imagePullSecrets:
- name: regcred
Then you can test if the image is correctly pulled by deploying this pod:
kubectl create -f [your pod yaml]
Wait for the pod to be created, then describe it with kubectl describe pod private-reg and look for an event sequence similar to:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned private-reg to gke-cluster-22-default-pool-e7830b6c-pxmt
Normal Pulling 4m kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt pulling image "gcr.io/XXX/XXX:latest"
Normal Pulled 3m kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt Successfully pulled image "gcr.io/XXX/XXX:latest"
Normal Created 3m kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt Created container
Normal Started 3m kubelet, gke-cluster-22-default-pool-e7830b6c-pxmt Started container