Kubernetes doesn't recognise local Docker image

I have the following deployment, and I use the image flaskapp1:latest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp
  labels:
    app: flaskapp
spec:
  selector:
    matchLabels:
      app: flaskapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: flaskapp1:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: flaskapp
spec:
  ports:
    - name: http
      port: 9090
      targetPort: 8080
  selector:
    app: flaskapp
Because the Kubernetes cluster that I have created has only 2 nodes (a master node and a worker node), the pod is created on the worker node, where I have locally built the Docker image.
More specifically, if I run
sudo docker images
I get the following output:
REPOSITORY TAG IMAGE ID CREATED SIZE
flaskapp1 latest 4004bc4ea926 34 minutes ago 932MB
For some reason, when I apply the deployment above, the status is ErrImagePull. Is there anything wrong with my yaml file?
When I run kubectl get pods -o wide I get the following output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
flaskapp-55dbdfd6cf-952v8 0/1 ImagePullBackOff 0 7m44s 192.168.69.220 knode2 <none> <none>

Glad that it works now, but I would like to add some information for others who might encounter this problem.
In general, use a real registry to provide the images. It can be hosted on Kubernetes as well, but it must be accessible from outside the cluster, since the nodes access it directly to pull images.
You should provide TLS-secured access to the registry, since container runtimes will not pull from external hosts without a valid certificate or special configuration.
If you want to experiment with images and don't want to use a public registry or run your own, you might be interested in an ephemeral registry: https://ttl.sh/
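As a sketch, the ttl.sh workflow for the image from the question could look like this (the repository name and the one-hour TTL tag here are made-up examples, not part of the original setup):

```shell
# Tag the local image for the ephemeral ttl.sh registry; the ":1h" tag
# sets a one-hour time-to-live (repository name is hypothetical)
docker tag flaskapp1:latest ttl.sh/flaskapp1-example:1h
docker push ttl.sh/flaskapp1-example:1h
```

The Deployment would then reference image: ttl.sh/flaskapp1-example:1h, which every node can pull.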


Why is my service always routing to the same pod?

I have a simple webserver that exposes the name of the pod it is running on via the OUT env var.
Deployment and service look like this:
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  selector:
    app: simpleweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleweb-deployment
  labels:
    app: simpleweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleweb
  template:
    metadata:
      labels:
        app: simpleweb
    spec:
      containers:
        - name: simpleweb
          env:
            - name: OUT
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          imagePullPolicy: Never
          image: simpleweb
          ports:
            - containerPort: 8080
I deploy this on my local kind cluster:
default simpleweb-deployment-5465f84584-m59n5 1/1 Running 0 12m
default simpleweb-deployment-5465f84584-mw8vj 1/1 Running 0 9m36s
default simpleweb-deployment-5465f84584-x6n74 1/1 Running 0 12m
and access it via
kubectl port-forward service/simpleweb-service 8080:8080
When I hit localhost:8080, I always reach the same pod.
Questions:
Is my service not doing round-robin?
Is there some caching that I am not aware of?
Do I have to expose my service differently? Is this a kind issue?
kubectl port-forward only selects the first pod matching the service selector. If you want round-robin, you need to use a load balancer such as Traefik or NGINX.
https://github.com/kubernetes/kubectl/blob/652881798563c00c1895ded6ced819030bfaa4d7/pkg/polymorphichelpers/attachablepodforobject.go#L52
To get round-robin routing across pods, I have to use a LoadBalancer service. MetalLB implements a load balancer for kind. Unfortunately, it currently does not support Apple M1 machines.
I assume that MetalLB + a LoadBalancer service would work on a different machine.
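For reference, a LoadBalancer variant of the simpleweb service might look like this (a sketch; it assumes MetalLB or another load-balancer implementation is installed in the cluster, and the name simpleweb-lb is made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-lb
spec:
  type: LoadBalancer
  selector:
    app: simpleweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```

Traffic arriving at the allocated external IP goes through kube-proxy, which balances across all pods matching the selector, instead of pinning to one pod the way port-forward does.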

Access NodePort Service Outside Kubeadm K8S Cluster

I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is the master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access my NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following urls, but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above urls throw a message saying "refused to connect".
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above urls say that the site cannot be reached.
Below is the content of my application.yaml, which I used to deploy the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
        - name: dealer-engine
          image: moviepopcorn/dealer_engine:0.0.1
          ports:
            - containerPort: 9090
          env:
            - name: MONGO_URL
              value: mongodb://mongo-service:27017/mazda
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
    - port: 9091
      targetPort: 9090
      nodePort: 30000
  externalIPs:
    - 10.0.0.12
I am a beginner in k8s, so please help me with how I can access my NodePort service from outside my k8s cluster.
I created a new simple Spring Boot application which returns "Hello world!!!" to the user when the endpoint "/helloWorld" is invoked. I deployed this Spring Boot app into my k8s cluster using the below yaml configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: moviepopcorn/hello_world:0.0.1
          ports:
            - containerPort: 9091
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 9091
      targetPort: 9091
      nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following url: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
Thank you all for your answers and inputs. Very much appreciated.
Have you installed a CNI plugin like flannel?
If yes, check your CIDR settings:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugins on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster; the CIDR setting must be the same as in the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
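After applying the manifest, a quick sanity check could look like the following (a sketch; the pod label and namespace can differ between flannel versions):

```shell
# flannel runs as a DaemonSet; one pod per node should be Running
kubectl get pods --all-namespaces -l app=flannel
# nodes should report Ready once the CNI is working
kubectl get nodes
```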

Kubernetes Pod unable to communicate with another Pod using a Service

I have 2 Pods with 1 container each. The container names are:
mc1
mc2
The mc1 container hosts an ASP.NET Core Razor Pages app, while mc2 hosts a Web API app. Now mc1 has to communicate with mc2, i.e. the Razor Pages app has to call the Web API app.
I created 2 deployments for these 2 pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    app: mc-app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mc-app1
  template:
    metadata:
      labels:
        app: mc-app1
    spec:
      containers:
        - name: mc1
          image: multiapp
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    app: mc-app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mc-app2
  template:
    metadata:
      labels:
        app: mc-app2
    spec:
      containers:
        - name: mc2
          image: multiapi
          imagePullPolicy: Never
          ports:
            - containerPort: 80
I also created a service for the pod containing the mc2 container (i.e. the Web API app):
apiVersion: v1
kind: Service
metadata:
  name: multi-container-service-2
spec:
  type: NodePort
  selector:
    app: mc-app2
  ports:
    - port: 8080
      targetPort: 80
The deployments and services are successfully applied to the k8s cluster.
Next, I enter the container mc1 and try to curl the service multi-container-service-2, but this is not working.
I am getting error:
curl: (7) Failed to connect to multi-container-service-2 port 80: Connection refused
I enter the shell of the container mc1 with the command:
kubectl exec -it dep1-5c78b8c889-tjzzr -c mc1 -- /bin/bash
Running curl there gives the error shown above.
Note that I have already installed curl using these 2 commands:
apt-get update
apt-get install curl
Why can't the app in the mc2 container be reached through the service? My operating system is Windows 10.
I am following these 2 tutorials:
Build ASP.NET Core applications deployed as Linux containers into an AKS/Kubernetes orchestrator
Communicate Between Containers in the Same Pod Using a Shared Volume
You have set the service port to 8080, but you are calling the service on port 80 (which is the container's port).
This should work:
curl http://multi-container-service-2:8080
As stated in the official kubernetes documentation:
Kubernetes creates DNS records for services and pods. You can contact services with consistent DNS names instead of IP addresses.
In order to communicate pod-to-pod through a service in your cluster, you have to use the following syntax:
{service_name}.{namespace}.svc.cluster.local
So in your case, with curl (note the service port 8080), it would be:
curl multi-container-service-2.default.svc.cluster.local:8080

how to deploy 2 docker containers that have to be linked together using yaml on kubernetes?

I have 2 containers (a Tomcat server and a MySQL DB). These containers have to be linked and deployed over Kubernetes using yaml files and the kubectl apply -f command. The problem is I don't know what the yaml files for the deployment should look like when I have to link the 2 containers.
The Dockerfile I used to build the Tomcat image is:
FROM tomcat
COPY app.war /usr/local/tomcat/webapps/
I tried using the --link attribute of the docker run command, but I want to do this using Kubernetes, i.e. using yaml files. So kindly tell me what changes I have to make to my deployment.yaml and service.yaml files in order to link the containers (tomcat and mysql) and deploy them from the master node.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-pod
spec:
  selector:
    matchLabels:
      run: tomcat-pod
  replicas: 1
  template:
    metadata:
      labels:
        run: tomcat-pod
    spec:
      containers:
        - name: tomcat
          image: tomcat:warfile
          ports:
            - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-pod
  labels:
    run: tomcat-pod
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    run: tomcat-pod
The Tomcat container, which has the war file, should be able to interact with the MySQL container and fetch the data from the database to be displayed after deployment, but currently I can only see the Tomcat homepage and not the output of the war file present in the webapps folder.
If you want two apps to communicate the way --link does, you have to create a Kubernetes Service object for each app.
For example, I have simpleservice in my cluster:
argela#etcd1:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h23m
simpleservice ClusterIP 10.96.219.103 <none> 80/TCP 179m
To reach that app from a pod, I have to use (assuming it is an http service) http://simpleservice/
This tutorial explains the service concept with examples.
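Applied to the Tomcat + MySQL question above, the --link equivalent would be a Service in front of the MySQL pod. A minimal sketch, assuming the MySQL deployment's pods carry a run: mysql-pod label (that label, the service name, and the database name below are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    run: mysql-pod    # must match the labels on the MySQL pods (assumed)
  ports:
    - port: 3306      # standard MySQL port
      targetPort: 3306
```

The webapp in the Tomcat container would then connect with a JDBC URL such as jdbc:mysql://mysql-service:3306/mydb, using the service name instead of a --link alias.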

expose kubernetes pod to internet

I created a pod with an api and a web docker container in Kubernetes using a yml file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
    - name: api
      image: gcr.io/test-1/api:latest
      ports:
        - containerPort: 8085
          name: http
          protocol: TCP
    - name: web
      image: gcr.io/test-1/web:latest
      ports:
        - containerPort: 5000
          name: http
          protocol: TCP
It shows my pod is up and running:
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to expose it from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
        - name: mycontainer
          image: myimage
          imagePullPolicy: Always
          command: ["/bin/bash"]
          ports:
            - name: containerport
              containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
    - name: sentiment
      protocol: TCP
      port: 80
      targetPort: containerport
You can deploy these two items using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: the service will allocate a public IP address, which you can find using kubectl get services, with output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carbon-relay ClusterIP *.*.*.* <none> 2003/TCP 78d
comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod it points to and re-spin it without consequence. So if I want to edit my test-dply, I can without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you have tried to expose a deployment (and not your pod), which does not exist.
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
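The expose command generates a Service object; written out as a manifest, the NodePort variant would be roughly as follows (a sketch built from the pod's purpose: test label and the web container's port 5000 in the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  selector:
    purpose: test   # matches the label on the pod from the question
  ports:
    - port: 80        # service port inside the cluster
      targetPort: 5000  # the web container's port
```

Applying this with kubectl apply -f is equivalent to the expose command, but keeps the configuration in version control.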
If you already have a pod and a service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing service: go to the Services and Ingress tab, select the service, click on Create Ingress, and fill in the name and the other mandatory fields.
Or you can create it using a yaml file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "example-ingress"
  namespace: "default"
spec:
  defaultBackend:
    service:
      name: "example-service"
      port:
        number: 8123
status:
  loadBalancer: {}