Trying to install a sample container app as a Pod in my local environment
I'm using the Kubernetes cluster that comes with Docker Desktop.
I'm creating the Pod with the command below and the following YAML file:
kubectl create -f test_image_pull.yml
apiVersion: v1
kind: Pod
metadata:
  # value must be lower case
  name: sample-python-web-app
spec:
  containers:
  - name: sample-hello-world
    image: local/sample:latest
    imagePullPolicy: Always
    command: ["echo", "SUCCESS"]
This is the Dockerfile used to build the image; the container runs without any issue when started directly with docker run.
# Use official runtime python
FROM python:2.7-slim
# set work directory to app
WORKDIR /app
# Copy current directory
COPY . /app
# install needed packages
RUN pip install --trusted-host pypi.python.org -r requirement.txt
# Make port 80 available to outside container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python" , "app.py"]
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# connect to redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(
        name=os.getenv("NAME", "world"),
        hostname=socket.gethostname(),
        visits=visits
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
requirement.txt:
Flask
Redis
When I describe the Pod, it shows the error below:
kubectl describe pod sample-python-web-app
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m25s default-scheduler Successfully assigned default/sample-python-web-app to docker-desktop
Normal Pulling 97s (x4 over 3m22s) kubelet, docker-desktop Pulling image "local/sample:latest"
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Failed to pull image "local/sample:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for local/sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 94s (x4 over 3m17s) kubelet, docker-desktop Error: ErrImagePull
Normal BackOff 78s (x6 over 3m16s) kubelet, docker-desktop Back-off pulling image "local/sample:latest"
Warning Failed 66s (x7 over 3m16s) kubelet, docker-desktop Error: ImagePullBackOff
Kubernetes pulls container images from a Docker Registry. Per the doc:
You create your Docker image and push it to a registry before
referring to it in a Kubernetes pod.
Moreover:
The image property of a container supports the same syntax as the
docker command does, including private registries and tags.
So, with the way the image is referenced in the Pod's spec ("image: local/sample:latest"), Kubernetes looks for it on Docker Hub, in a repository named "local".
You can push the image to Docker Hub or some other external Docker Registry, public or private; you can host a Docker Registry on the Kubernetes cluster; or you can run a Docker Registry locally, in a container.
To run a Docker registry locally:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Next, find the IP address of the host; below I'll use 10.0.2.1 as an example.
Then, assuming the image name is "local/sample:latest", tag the image:
docker tag local/sample:latest 10.0.2.1:5000/local/sample:latest
...and push the image to the local registry:
docker push 10.0.2.1:5000/local/sample:latest
Next, change how the image is referenced in the Pod's configuration YAML, from
image: local/sample:latest
to
image: 10.0.2.1:5000/local/sample:latest
Restart the pod.
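For example, a minimal sketch of recreating the Pod after updating the manifest from the question (test_image_pull.yml):
kubectl delete pod sample-python-web-app
kubectl create -f test_image_pull.yml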
EDIT: Most likely the local Docker daemon will have to be configured to treat the local Docker registry as insecure. One way to configure that is described here; just replace "myregistrydomain.com" with the host's IP (e.g. 10.0.2.1). Docker Desktop also allows you to edit the daemon's configuration file through the GUI.
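As a sketch, assuming the registry runs at 10.0.2.1:5000 as above, the daemon configuration (daemon.json) would get an entry like:
{
  "insecure-registries": ["10.0.2.1:5000"]
}
Restart the Docker daemon after changing this setting.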
If you want to set up a local registry for your Kubernetes cluster, you might follow this guide.
I would recommend using Trow.io, which is an image management solution for Kubernetes that lets you quickly create a registry running within Kubernetes and provides a secure and fast way to get containers running on the cluster.
We're building an image management solution for Kubernetes (and possibly other orchestrators). At its heart is the Trow Registry, which runs inside the cluster, is simple to set-up and fully integrated with Kubernetes, including support for auditing and RBAC.
Why "Trow"
"Trow" is a word with multiple, divergent meanings. In Shetland folklore a trow is a small, mischievous creature, similar to the Scandanavian troll. In England, it is a old style of cargo boat that transported goods on rivers. Finally, it is an archaic word meaning "to think, believe, or trust". The reader is free to choose which interpretation they like most, but it should be pronounced to rhyme with "brow".
Whole installation process is described here.
Related
I'm running Kubeflow on a local machine that I deployed with multipass using these steps, but when I tried running my pipeline, it got stuck with the message ContainerCreating. When I ran kubectl describe pod train-pipeline-msmwc-1648946763 -n kubeflow, I found this in the Events part of the output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 7m12s (x51 over 120m) kubelet, kubeflow-vm Unable to mount volumes for pod "train-pipeline-msmwc-1648946763_kubeflow(45889c06-87cf-4467-8cfa-3673c7633518)": timeout expired waiting for volumes to attach or mount for pod "kubeflow"/"train-pipeline-msmwc-1648946763". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact pipeline-runner-token-dkvps]
Warning FailedMount 2m22s (x67 over 122m) kubelet, kubeflow-vm MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
It looks to me like there is a problem with my deployment, but I'm new to Kubernetes and can't figure out what I'm supposed to do right now. Any idea on how to solve this? I don't know if it helps, but I'm pulling the containers from a private Docker registry and I've set up the secret according to this.
You don't need to use Docker. In fact, the problem is with the workflow-controller-configmap in the kubeflow namespace. You can edit it with
kubectl edit configmap workflow-controller-configmap -n kubeflow
and change containerRuntimeExecutor: docker to containerRuntimeExecutor: pns. You can also change some of the steps and install Kubeflow 1.3 in multipass 1.21 rather than 1.15. Do not use the kubeflow add-on (at least it didn't work for me). You need kustomize 3.2 to create the manifests, as mentioned in https://github.com/kubeflow/manifests#installation.
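To check the current value before editing, a quick sketch:
kubectl get configmap workflow-controller-configmap -n kubeflow -o yaml | grep containerRuntimeExecutor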
There was one step missing that is not mentioned in the tutorial: I had to install Docker. I installed Docker, rebooted the machine, and now everything works fine.
I am using the mount propagation feature of Kubernetes to check the health of mount points of a certain type. I create a DaemonSet and run a script which does a simple ls on these mount points. I noticed that new mount points are not getting listed from the pods. Is this the expected behaviour?
volumeMounts:
- mountPath: /host
  name: host-kubelet
  mountPropagation: HostToContainer
volumes:
- name: host-kubelet
  hostPath:
    path: /var/lib/kubelet
Related issue: hostPath containing mounts do not update as they change on the host #44713
In brief, mount propagation allows sharing volumes mounted by a Container with other Containers in the same Pod, or even with other Pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:
HostToContainer - one-way propagation, from host to container. If you mount anything inside the volume, the Container will see it there.
Bidirectional - in addition to propagation from host to container, all volume mounts created by the Container will be propagated back to the host, so all Containers of all Pods that use the same volume will see it as well (see the sketch after this list).
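For illustration, here is a minimal sketch of a Bidirectional mount, reusing the volume from the question (the container name and busybox image are just placeholders); note that a container using Bidirectional mount propagation must be privileged:
containers:
- name: mount-checker
  image: busybox
  securityContext:
    privileged: true
  volumeMounts:
  - mountPath: /host
    name: host-kubelet
    mountPropagation: Bidirectional
volumes:
- name: host-kubelet
  hostPath:
    path: /var/lib/kubelet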
Based on the documentation, the mount propagation feature is in alpha state for v1.9 clusters and is going to be beta in v1.10.
I've reproduced your case on Kubernetes v1.9.2 and found that it completely ignores the mountPropagation configuration parameter. If you check the current state of the DaemonSet or Deployment, you'll see that this option is missing from the listed YAML configuration:
$ kubectl get daemonset --export -o yaml
If you run a plain Docker container with the mount propagation option, you'll see it working as expected:
docker run -d -it -v /tmp/mnt:/tmp/mnt:rshared ubuntu
Comparing the Docker container configuration with the Kubernetes Pod's container in the volume mount section, you'll see that the last flag (shared/rshared) is missing for the Kubernetes container.
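One way to compare is to inspect the mounts of each running container; a sketch (replace <container-id> with the actual container ID):
docker inspect <container-id> --format '{{ json .Mounts }}'
The Propagation field of the Kubernetes-managed container's mount will typically show rprivate rather than rshared.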
And that's why it happens on Google Kubernetes Engine clusters and may happen on clusters managed by other providers:
To ensure stability and production quality, normal Kubernetes Engine
clusters only enable features that are beta or higher. Alpha features
are not enabled on normal clusters because they are not
production-ready or upgradeable.
Since Kubernetes Engine automatically upgrades the Kubernetes control
plane, enabling alpha features in production could jeopardize the
reliability of the cluster if there are breaking changes in a new
version.
Alpha-level feature availability: committed to the main Kubernetes repo; appears in an official release; the feature is disabled by default, but may be enabled by a flag (in case you are able to set flags).
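For example, on clusters where you control the component flags, the alpha feature could be switched on with a feature gate along these lines (a sketch; the gate was called MountPropagation in the v1.9/v1.10 timeframe, so check the documentation for your exact version):
--feature-gates=MountPropagation=true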
Before mount propagation can work properly on some deployments (CoreOS, RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker, as shown below.
Edit your Docker’s systemd service file. Set MountFlags as follows:
MountFlags=shared
Or, remove MountFlags=slave if present. Then restart the Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
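To verify the propagation mode of the mount on the host, something like findmnt can be used (a sketch, assuming findmnt is available):
findmnt -o TARGET,PROPAGATION --target /var/lib/kubelet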
I ran eval $(minikube docker-env) then built a docker container. When I run docker images on my host I can see the image. When I run minikube ssh then docker images I can see it.
When I try to run it, the pod fails to launch. kubectl describe pod gives:
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal Pulling pulling image "personalisation-customer:latest"
14m 3m 7 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Warning Failed Failed to pull image "personalisation-customer:latest": rpc error: code = 2 desc = Error: image library/personalisation-customer:latest not found
14m 2s 66 kubelet, minikube Warning FailedSync Error syncing pod
14m 2s 59 kubelet, minikube spec.containers{quoting-crab-customer-refresh-cache-cron} Normal BackOff Back-off pulling image "personalisation-customer:latest"
My imagePullPolicy is Always.
What could be causing this? Other pods are working locally.
You aren't actually pulling from your local registry; you are using previously downloaded or locally built images. Since you are specifying imagePullPolicy: Always, Kubernetes will always try to pull the image from a registry.
Your image reference doesn't include a specific Docker registry, so Docker interprets personalisation-customer:latest as index.docker.io/library/personalisation-customer:latest, an image that doesn't exist in the public Docker registry.
So you have two options: set imagePullPolicy: IfNotPresent, or push the image to a registry.
The local Docker cache isn't a registry. Kubernetes tries to download the image from Docker Hub (the default registry), since you set imagePullPolicy to Always. Set it to Never so Kubernetes uses the local image.
I had the same issue. The problem was not mentioning the image version. I used
kubectl run testapp1 --image=<image> --image-pull-policy=IfNotPresent
instead of,
kubectl run testapp1 --image=<image>:<version> --image-pull-policy=IfNotPresent
For minikube, make the changes as follows:
Eval the docker env using:
eval $(minikube docker-env)
Build the docker image:
docker build -t my-image .
Set the image name to just "my-image" in the Pod specification in your YAML file.
Set imagePullPolicy to Never in your YAML file. Here is an example:
apiVersion:
kind:
metadata:
spec:
  template:
    metadata:
      labels:
        app: my-image
    spec:
      containers:
      - name: my-image
        image: "my-image"
        imagePullPolicy: Never
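After that, a quick sketch of applying the manifest and checking the result (assuming it is saved as pod.yaml):
kubectl apply -f pod.yaml
kubectl get pods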
When starting up a Kubernetes cluster, I load etcd plus the core kubernetes processes - kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler - as static pods from a private registry. This has worked in the past by ensuring that the $HOME environment variable is set to "/root" for kubelet, and then having /root/.docker/config.json defined with the credentials for the private docker registry.
When attempting to run Kubernetes 1.6, with CRI enabled, I get errors in the kubelet log saying it cannot pull the pause:3.0 container from my private docker registry due to no authentication.
Setting --enable-cri=false on the kubelet command line works, but when CRI is enabled, it doesn't seem to use the /root/.docker/config file for authentication.
Is there some new way to provide the docker credentials needed to load static pods when running with CRI enabled?
In 1.6, I managed to make it work with the following recipe in https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
You need to specify the newly created myregistrykey as the credential under the imagePullSecrets field in the Pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
It turns out that there is a deficiency in the CRI capabilities in Kubernetes 1.6. With CRI, the "pause" container, now called the "Pod Sandbox Image", is treated as a special case, because it is an "implementation detail" of the container runtime whether or not you even need it. In the 1.6 release, the credentials applied to other containers, for example from /root/.docker/config.json, are not used when trying to pull the Pod Sandbox Image.
Thus, if you are trying to pull this image from a private registry, the CRI logic doesn't associate the credentials with the pull request. There is now a Kubernetes issue (#45738) to address this, targeted for 1.7.
In the meantime, an easy workaround is to pre-pull the "Pause" container into the node's local docker image cache before starting up the kubelet process.
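A sketch of that workaround, run on each node before the kubelet starts (registry.example.com is just a placeholder for the private registry, and it assumes the kubelet is already pointed at that image via --pod-infra-container-image):
docker pull registry.example.com/pause:3.0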
My company uses its own root CA, and when I'm trying to pull images, even from a private registry, I'm getting this error:
1h 3m 22 {kubelet minikube} Warning FailedSync Error syncing
pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull:
"image pull failed for gcr.io/google_containers/pause-amd64:3.0, this
may be because there are no credentials on this request.
details:
(Error response from daemon: Get https://gcr.io/v1/_ping: x509:
certificate signed by unknown authority)"
1h 10s 387 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with
ImagePullBackOff: "Back-off pulling image
\"gcr.io/google_containers/pause-amd64:3.0\""
How can I install the root CA into minikube, or avoid this message, i.e. use only the private registry and not pull anything from gcr.io?
The only solution I've found so far is adding the --insecure-registry gcr.io option to minikube.
To address:
x509: certificate signed by unknown authority
Could you please try the following suggestion from the Minikube repo?
copy the cert into the VM. The location should be:
/etc/docker/certs.d/
from here: https://docs.docker.com/engine/security/certificates/
ref
That thread also includes the following one-liner:
cat <certificatefile> \
| minikube ssh "sudo mkdir -p /etc/docker/certs.d/<domain> && sudo tee /etc/docker/certs.d/<domain>/ca.crt"
The issue here is that the CA trust chain of the Linux host needs to be updated. The easiest way is to reboot the Linux host after copying the certs into the VM; if rebooting is not an option, look for a way to run update-ca-certificates.
Just restarting the Docker daemon will most likely not solve this issue.
Note: allowing the Docker daemon to use insecure registries means certificates aren't verified; while this may help, it does not solve the question asked here.
A straightforward way to do this, independent of minikube, would be to use an imagePullSecrets configuration. As documented in the Pulling an Image from a Private Registry guide, you can create a Secret and use it along with your image like this:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: private-container
    image: <your-private-image>
  imagePullSecrets:
  - name: the_secret
Here the_secret could be created with:
kubectl create secret docker-registry the_secret --docker-server=xxx --docker-username=xxx --docker-password=xxx --docker-email=xxx
Or with:
kubectl create secret generic the_secret --from-file=.docker/config.json
Or with anything else related to kubectl create secret; see the documentation for details.
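If you would rather not add imagePullSecrets to every Pod spec, the secret can also be attached to the service account the Pods run under; a minimal sketch for the default service account:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "the_secret"}]}'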
Edit: Even in the official minikube documentation you'll find that they use Secrets along with the registry-creds addon.
You'll find the generic documentation here and the documentation for the addon here.
It boils down to:
minikube addons enable registry-creds
But technically it does the same as described above.
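The addon can be pointed at your registry credentials interactively before you enable it; a sketch:
minikube addons configure registry-creds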
You can add your company's certificate to the minikube Kubernetes cluster by first copying the certificate to $HOME/.minikube/certs:
mkdir -p $HOME/.minikube/certs
cp my-company.pem $HOME/.minikube/certs/my-company.pem
And then starting minikube with the --embed-certs option:
minikube start --embed-certs
Source: Official documentation