I'm using minikube on macOS 10.12 and trying to use a private image hosted on Docker Hub. I know that minikube launches a VM which, as far as I know, is the single node of my local Kubernetes cluster and hosts all my pods.
I read that I could use the VM's Docker runtime by running eval $(minikube docker-env), so I used those variables to switch from my local Docker runtime to the VM's. Running docker images, I could see that the switch had taken effect.
My next step was to log in to Docker Hub using docker login and then pull my image manually, which finished without error. After that I assumed the image would be ready to be used by any pod in the cluster, but I keep getting ImagePullBackOff. I also tried to SSH into the VM via minikube ssh and the result is the same: the image is there to be used, but for some reason the cluster refuses to use it.
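For reference, the sequence of commands was roughly the following (a sketch; the actual pull was of the private image referenced in the deployment below):

eval $(minikube docker-env)    # point the local docker CLI at the minikube VM's Docker daemon
docker login                   # log in to Docker Hub
docker pull godraude/nginx     # pull the image manually; completes without error
docker images                  # the image shows up in the VM's image list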
In case it helps, this is my deployment description file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
And this is the output of kubectl describe pod <podname>:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned web-deployment-2451628605-vtbl8 to minikube
1m 23s 4 {kubelet minikube} spec.containers{nginx} Normal Pulling pulling image "godraude/nginx"
1m 20s 4 {kubelet minikube} spec.containers{nginx} Warning Failed Failed to pull image "godraude/nginx": Error: image godraude/nginx not found
1m 20s 4 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "Error: image godraude/nginx not found"
1m 4s 5 {kubelet minikube} spec.containers{nginx} Normal BackOff Back-off pulling image "godraude/nginx"
1m 4s 5 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image \"godraude/nginx\""
I think what you need is to create a secret, which will tell Kubernetes where to pull your private image from and with which credentials:
kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
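For Docker Hub, which is the registry in question here, the server value would be the Docker Hub endpoint; a sketch with placeholder credentials:

kubectl create secret docker-registry my-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<dockerhub-username> \
  --docker-password=<dockerhub-password> \
  --docker-email=<dockerhub-email>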
Use the command below to list your secrets:
kubectl get secret
NAME TYPE DATA AGE
my-secret kubernetes.io/dockercfg 1 100d
Now, in the deployment definition, you need to specify which secret to use:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
      imagePullSecrets:
      - name: my-secret
The problem was the image pull policy. It was set to Always, so Docker was trying to pull the image even though it was already present. Setting imagePullPolicy: Never solved the issue.
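In other words, since the image had already been pulled manually into the VM's Docker daemon, the pod only needed to use that local copy instead of forcing a pull; a sketch of the changed container spec:

      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Never   # use the image already present on the node instead of pulling
        ports:
        - containerPort: 80
        - containerPort: 443

imagePullPolicy: IfNotPresent should work here as well, since it only pulls when the image is missing from the node.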
Related
A Linux container pod, using Docker images from Azure Container Registry, keeps restarting with restartPolicy set to Always. The pod description is below.
kubectl describe pod example-pod
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 11 Jun 2020 03:27:11 +0000
Finished: Thu, 11 Jun 2020 03:27:12 +0000
...
Back-off restarting failed container
This pod is created with a secret to access the ACR registry.
The reason is that the pod completes execution successfully with exit code 0, whereas it should keep listening on a particular port. The relevant Microsoft documentation is the Container Group Runtime page, under the heading "Container continually exits and restarts".
The deployment-example.yml file content is below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  namespace: development
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: contentocr.azurecr.io/example:latest
        #command: ["ping -t localhost"]
        imagePullPolicy: Always
        ports:
        - name: http-port
          containerPort: 3000
      imagePullSecrets:
      - name: regpass
      restartPolicy: Always
      nodeSelector:
        agent: linux
---
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: development
  labels:
    app: example
spec:
  ports:
  - name: http-port
    port: 3000
    targetPort: 3000
  selector:
    app: example
  type: LoadBalancer
The output of kubectl get events is below.
3m39s Normal Scheduled pod/example-deployment-5dc964fcf8-gbm5t Successfully assigned development/example-deployment-5dc964fcf8-gbm5t to aks-agentpool-18342716-vmss000000
2m6s Normal Pulling pod/example-deployment-5dc964fcf8-gbm5t Pulling image "contentocr.azurecr.io/example:latest"
2m5s Normal Pulled pod/example-deployment-5dc964fcf8-gbm5t Successfully pulled image "contentocr.azurecr.io/example:latest"
2m5s Normal Created pod/example-deployment-5dc964fcf8-gbm5t Created container example
2m49s Normal Started pod/example-deployment-5dc964fcf8-gbm5t Started container example
2m20s Warning BackOff pod/example-deployment-5dc964fcf8-gbm5t Back-off restarting failed container
6m6s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-2fdt5
3m39s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-gbm5t
6m6s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m39s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m38s Normal EnsuringLoadBalancer service/example Ensuring load balancer
3m34s Normal EnsuredLoadBalancer service/example Ensured load balancer
The Dockerfile entry point is ENTRYPOINT ["npm", "start"] with CMD ["tail -f /dev/null/"].
It runs locally; implicitly, it assigns the CI="true" flag. However, in docker-compose, stdin_open: true or tty: true needs to be set, and in the Kubernetes deployment file an environment variable named CI needs to be set to "true".
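A sketch of both settings, reusing the image and container name from the deployment above (the surrounding fields are omitted):

# docker-compose.yml: keep stdin/tty open so the process does not exit immediately
services:
  example:
    image: contentocr.azurecr.io/example:latest
    stdin_open: true
    tty: true

# Kubernetes deployment: set the CI environment variable on the container
      containers:
      - name: example
        image: contentocr.azurecr.io/example:latest
        env:
        - name: CI
          value: "true"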
The command below solved my problem:
az aks update -n aks-nks-k8s-cluster -g aks-nks-k8s-rg --attach-acr aksnksk8s
After executing the above command, the following will be displayed:
Add ROLE Propagation done [###############] 100.0000%
and then Running.., followed by the response trail after some time.
Here,
aks-nks-k8s-cluster: the name of the cluster I created and am using
aks-nks-k8s-rg: the resource group I created and am using
aksnksk8s: the container registry I created and am using
Is there a way to request the status of a readinessProbe by using a service name linked to a deployment? In an initContainer, for example?
Imagine we have a deployment X using a readinessProbe, and a service linked to it, so we can request, for example, http://service-X:8080.
Now we create a deployment Y; in its initContainer we want to know whether deployment X is ready. Is there a way to ask for something like deployment-X.ready or service-X.ready?
I know that the correct way to handle dependencies is to let Kubernetes do it for us, but I have a container that doesn't crash and that I have no control over...
You can add an nginx proxy sidecar to deployment Y.
Point the readiness probe of deployment Y's init container at a port on nginx, and have that port proxied to deployment Y's readiness probe.
Instead of a readinessProbe, you can just use an initContainer.
You create pod/deployment X, create service X, and add an initContainer that looks for service X.
If it finds it, the pod starts.
If it doesn't, it keeps looking until service X is created.
As a simple example, we create an nginx deployment by using kubectl apply -f nginx.yaml.
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then we create the pod with an initContainer.
initContainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup my-nginx; do echo waiting for myapp-pod2; sleep 2; done;']
The initContainer will look for the my-nginx service; until you create it, the pod will stay in Init:0/1 status.
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/1 0 15m
After you add the service, for example by using kubectl expose deployment/my-nginx, the initContainer will find the my-nginx service and the pod will be created.
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 35m
Result:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/myapp-pod to kubeadm2
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container init-myservice
Normal Started 20s kubelet, kubeadm2 Started container init-myservice
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container myapp-container
Normal Started 20s kubelet, kubeadm2 Started container myapp-container
Let me know if that answers your question.
I finally found a solution by following this link:
https://blog.giantswarm.io/wait-for-it-using-readiness-probes-for-service-dependencies-in-kubernetes/
We first need to create a ServiceAccount in Kubernetes that allows listing endpoints from an initContainer. After this, we query the available endpoints; if there is at least one, the dependency is ready (in my case).
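A minimal sketch of what that can look like, assuming the dependency is exposed as a service named service-x in the same namespace; the account, role, and image names below are illustrative, not taken from the linked post:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: endpoint-reader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoint-reader
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: endpoint-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader

Then, in deployment Y's pod template, an initContainer can poll until service-x has at least one ready endpoint address:

      serviceAccountName: endpoint-reader
      initContainers:
      - name: wait-for-x
        image: bitnami/kubectl
        command: ['sh', '-c', 'until kubectl get endpoints service-x -o jsonpath="{.subsets[*].addresses[*].ip}" | grep -q .; do echo waiting for service-x endpoints; sleep 2; done']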
I'm trying to deploy my NodeJS application to EKS and run 3 pods with exactly the same container.
Here's the error message:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-deployment-7c86bb474c-5txqq 0/1 Pending 0 18s
cm-deployment-7c86bb474c-cd7qs 0/1 ImagePullBackOff 0 18s
cm-deployment-7c86bb474c-qxglx 0/1 ImagePullBackOff 0 18s
public-api-server-79b7f46bf9-wgpk6 0/1 ImagePullBackOff 0 2m30s
$ kubectl describe pod cm-deployment-7c86bb474c-5txqq
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x4 over 2m55s) default-scheduler 0/3 nodes are available: 3 Insufficient pods.
So it says that 0/3 nodes are available. However, if I run kubectl get nodes --watch:
$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-192-168-163-73.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-172-235.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-184-236.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
3 nodes are running.
Here are my configurations:
aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: [MY custom role ARN]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cm-literal
  template:
    metadata:
      name: cm-literal-pod
      labels:
        app: cm-literal
    spec:
      containers:
      - name: cm
        image: docker.io/cjsjyh/public_test:1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        #imagePullSecrets:
        #  - name: regcred
        env:
          [my environment variables]
I applied both .yaml files.
How can I solve this?
Thank you
My guess, without running the manifests you've got, is that the 1 tag on your image doesn't exist, so you're getting ImagePullBackOff, which usually means the container runtime can't find the image to pull.
Looking at the Docker Hub page, there's no 1 tag there, just latest.
So either removing the tag or replacing 1 with latest may resolve your issue.
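For example, the image line in the deployment could become (assuming latest is the tag that actually exists on Docker Hub):

        image: docker.io/cjsjyh/public_test:latest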
I experienced this issue with AWS instance types with low resources.
I'm new to k8s, so some of my terminology might be off. But basically, I'm trying to deploy a simple web api: one load balancer in front of n pods (where right now, n=1).
However, when I try to visit the load balancer's IP address it doesn't show my web application. When I run kubectl get deployments, I get this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tl-api 1 1 1 0 4m
Here's my YAML file. Let me know if anything looks off--I'm very new to this!
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tl-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tl-api
    spec:
      containers:
      - name: tl-api
        image: tlk8s.azurecr.io/devicecloudwebapi:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-auth
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: tl-api
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: tl-api
Edit 2: When I try using ACS (which supports Windows), I get this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned tl-api-3466491809-vd5kg to dc9ebacs9000
Normal SuccessfulMountVolume 11m kubelet, dc9ebacs9000 MountVolume.SetUp succeeded for volume "default-token-v3wz9"
Normal Pulling 4m (x6 over 10m) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 1s (x50 over 10m) kubelet, dc9ebacs9000 Error syncing pod
Normal BackOff 1s (x44 over 10m) kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
I then try examining the failed pod:
PS C:\users\<me>\source\repos\DeviceCloud\DeviceCloud\1- Presentation\DeviceCloud.Web.API> kubectl logs tl-api-3466491809-vd5kg
Error from server (BadRequest): container "tl-api" in pod "tl-api-3466491809-vd5kg" is waiting to start: trying and failing to pull image
When I run docker images I see the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 24 hours ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 24 hours ago 7.85GB
devicecloudwebapi dev bb33ab221910 25 hours ago 7.76GB
Your problem is that the container image tlk8s.azurecr.io/devicecloudwebapi:v1 is in a private container registry. See the events at the bottom of the output of the following command:
$ kubectl describe po -l=app=tl-api
The official Kubernetes docs describe how to resolve this issue; see Pull an Image from a Private Registry. Essentially (a sketch for your registry follows these steps):
Create a secret with kubectl create secret docker-registry
Reference it in your deployment under the spec.imagePullSecrets key
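A sketch of those two steps for this registry; the credential values are placeholders (for ACR they would typically be a service principal or the registry's admin user):

kubectl create secret docker-registry acr-auth \
  --docker-server=tlk8s.azurecr.io \
  --docker-username=<registry-username> \
  --docker-password=<registry-password> \
  --docker-email=<your-email>

and in the deployment's pod spec:

    spec:
      containers:
      - name: tl-api
        image: tlk8s.azurecr.io/devicecloudwebapi:v1
      imagePullSecrets:
      - name: acr-auth

Note that the deployment above already references an imagePullSecrets entry named acr-auth; the pull will only succeed once a secret with that name and valid registry credentials actually exists in the namespace.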
I'm trying to configure Kubernetes to pull images from our private Artifactory Docker repo.
First I configured a secret with kubectl:
kubectl create secret docker-registry artifactorysecret --docker-server=ourcompany.jfrog.io/path/list/docker-repo/ --docker-username=artifactory-user --docker-password=artipwd --docker-email=myemail
After creating a pod using kubectl with:
apiVersion: v1
kind: Pod
metadata:
  name: base-infra
spec:
  containers:
  - name: api-gateway
    image: api-gateway
  imagePullSecrets:
  - name: artifactorysecret
I get an "ImagePullBackOff" error in Kubernetes:
3m   3m          1    default-scheduler                                             Normal    Scheduled     Successfully assigned consort-base-infra to k8s-agent-ab2f29b2-2
3m   0s          5    kubelet, k8s-agent-ab2f29b2-2   spec.containers{api-gateway}  Normal    Pulling       pulling image "api-gateway"
2m   <invalid>   5    kubelet, k8s-agent-ab2f29b2-2   spec.containers{api-gateway}  Warning   Failed        Failed to pull image "api-gateway": rpc error: code = 2 desc = Error: image library/api-gateway:latest not found
2m   <invalid>   5    kubelet, k8s-agent-ab2f29b2-2                                 Warning   FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ErrImagePull: "rpc error: code = 2 desc = Error: image library/api-gateway:latest not found"
2m   <invalid>   17   kubelet, k8s-agent-ab2f29b2-2   spec.containers{api-gateway}  Normal    BackOff       Back-off pulling image "api-gateway"
2m   <invalid>   17   kubelet, k8s-agent-ab2f29b2-2                                 Warning   FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ImagePullBackOff: "Back-off pulling image \"api-gateway\""
There is of course a latest version in the repo. I don't know what I'm missing here. It seems Kubernetes is able to log in to the repo...
OK, I found out how to connect to Artifactory thanks to Pull image Azure Container Registry - Kubernetes.
There are two things to pay attention to:
1) In the secret definition, don't forget https:// in the server attribute:
kubectl create secret docker-registry regsecret --docker-server=https://our-repo.jfrog.io --docker-username=myuser --docker-password=<your-pword> --docker-email=<your-email>
2) In the deployment descriptor, use the full image path and specify the secret (or append it to the default ServiceAccount; see the sketch after the example below):
apiVersion: v1
kind: Pod
metadata:
  name: consort-base-infra-art
spec:
  containers:
  - name: api-gateway
    image: our-repo.jfrog.io/api-gateway
  imagePullSecrets:
  - name: regsecret
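And for the alternative mentioned in 2), a sketch of attaching the secret to the default ServiceAccount, so that pods without an explicit imagePullSecrets entry can also pull from the repo:

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regsecret"}]}'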