I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is the master node and the other is a worker node. Both nodes run Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed it with a NodePort service on port 30000, but I am not sure how to access this NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster - master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs, but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs return a message which says "refused to connect".
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml, which I used for deploying the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
        - name: dealer-engine
          image: moviepopcorn/dealer_engine:0.0.1
          ports:
            - containerPort: 9090
          env:
            - name: MONGO_URL
              value: mongodb://mongo-service:27017/mazda
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
    - port: 9091
      targetPort: 9090
      nodePort: 30000
  externalIPs:
    - 10.0.0.12
I am a beginner in k8s, so please help me understand how I can access my NodePort service from outside the k8s cluster.
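As a sketch of the usual checks (using the names from the YAML above), one would verify the service, its endpoints, and which node the pod landed on, and then curl the NodePort from a node itself before trying from the Windows host:
kubectl get svc dealer-engine -o wide
kubectl get endpoints dealer-engine
kubectl get pods -l app=dealer-engine -o wide
curl http://192.168.254.95:30000/swagger-ui.html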
I created a new, simple Spring Boot application which returns "Hello world!!!" back to the user when the "/helloWorld" endpoint is invoked. I deployed this Spring Boot app into my k8s cluster using the below YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: moviepopcorn/hello_world:0.0.1
          ports:
            - containerPort: 9091
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 9091
      targetPort: 9091
      nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following url <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
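For example, assuming 192.168.254.94 is the master node IP from above, a quick check from the host would be:
curl http://192.168.254.94:30001/helloWorld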
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugin on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster with a pod network CIDR that matches the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
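As a quick sanity check after applying the manifest (the label and namespace can vary between flannel versions), the flannel pods should be running and the nodes should report Ready:
kubectl get pods -n kube-system -o wide | grep flannel
kubectl get nodes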
I am trying to access a Kubernetes NodePort service in the browser on macOS, running on minikube, but I can't.
This is the Service definition file:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004
  selector:
    app: myapp
And this is the Pod definition file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
  labels:
    env: production
    app: myapp
spec:
  containers:
    - name: nginx
      image: nginx
And this is the Deployment definition file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    tier: frontend
    app: myapp
spec:
  selector:
    matchLabels:
      env: production
  replicas: 6
  template:
    metadata:
      name: nginx-2
      labels:
        env: production
    spec:
      containers:
        - name: nginx
          image: nginx
I cannot access the NodePort service using this URL in the browser:
http://localhost:30004
Also, entering the minikube ip instead of localhost in the above URL results in a timeout.
And finally, by running the below command:
minikube service myapp-service --url
A sample output like this is generated:
http://127.0.0.1:53751
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
But then the below error is shown to the user:
The connection was reset
Update: The issue was that the label app: myapp was not set in the template section of the Deployment definition file.
Look, you are not exposing your localhost to the NodePort. You are exposing the NodePort on your Kubernetes cluster.
So you have to access it http://nodeip:nodePort.
Think about what localhost is.
You have localhost on your PC.
You have localhost on the VM (the minikube node).
You have localhost in each container running inside your cluster.
If you want to use your PC's localhost to access a port inside a container in a Pod, you can do this:
kubectl port-forward svc/serviceName reachablePortFromYourPc:servicePort
For example:
kubectl port-forward svc/serviceName 80:80
This starts a port forwarding session. As long as it is running, you can access it from your browser.
This is good only for testing.
To access the NodePort, use:
minikubeIp:NodePort
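For example, with the service from the question (nodePort 30004), something like this should work; if the minikube IP is not directly reachable from the host (as with the Docker driver on macOS), fall back to minikube service:
curl http://$(minikube ip):30004
minikube service myapp-service --url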
I created two replicas of nginx with the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.20-alpine
          ports:
            - containerPort: 80
And I created the service with:
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-service
spec:
  selector:
    app: nginx
  ports:
    - port: 8082
      targetPort: 80
Everything looks good. But when I do
minikube service nginx-test-service
I am able to access nginx. But when I look at the logs of the two pods, the requests always go to a single pod. The other pod is not getting any requests.
But the Kubernetes service should do load balancing, right?
Am I missing anything?
One way to get load balancing running on-premise is with IP Virtual Server (ipvs). It is a kernel service which hands out the IP of the next pod to schedule/call.
It is likely installed already:
lsmod | grep ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 19
Have your CNI properly set up, and run:
kubectl edit cm -n kube-system kube-proxy
Edit the config: set the mode to ipvs
mode: "ipvs"
and adjust the ipvs section:
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: "rr"
As always there are lots of variables biting each other with k8s, but it is possible with ipvs.
https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
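After editing the ConfigMap, kube-proxy has to be restarted to pick up the new mode. A sketch of that step plus a verification (ipvsadm may need to be installed on the node first):
kubectl -n kube-system rollout restart daemonset kube-proxy
# on a node: list the virtual servers ipvs created for the services
sudo ipvsadm -Ln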
I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend; it is just an API built with FastAPI.) However, I am trying to deploy this via Kubernetes, and I am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.16.1
          ports:
            - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a printout of the deployment, the namespace it is in, the pod template, a list of events, etc. So I am pretty sure it is properly deployed.
How can I access the application? What would the URL be? I have tried a variety of localhost + port combinations to no avail. I am new to Kubernetes, so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: site
          image: nginx:1.16.1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have given containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
Port forward using kubectl port-forward deployment/my-deployment 8080:8080 (see the sketch after this list)
Create a NodePort service and use http://<NODEIP>:<NODEPORT>
Create a LoadBalancer service. This works only in supported cloud environments such as AWS, GKE, etc.
Use an ingress controller such as nginx to expose the application.
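For instance, a quick hypothetical test of the port-forward option against the nginx Deployment from the question (container listening on 80) could be:
kubectl port-forward deployment/my-deployment 8080:80
curl http://localhost:8080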
By default, k8s applications are exposed only within the cluster. If you want to access one from outside of the cluster, you can select any of the below options:
Expose the Deployment as a NodePort service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the service and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080). As part of this service you get an external IP which can be used to access your service: http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoints using kubectl get ep.
I have a simple microservice setup running in a minikube cluster. It is inspired by this example.
My setup includes a simple router microservice that contains a golang webserver. What I want to test now is the load balancing when there is more than one pod. But there seems to be no load balancing whatsoever.
The kubernetes file for the microservices looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  replicas: 2
  strategy: {}
  template:
    metadata:
      labels:
        app: router
        tier: router
    spec:
      containers:
        - image: {myregistry}/router
          name: router
          resources: {}
          ports:
            - name: target-port
              containerPort: 8082
          env:
            - name: PORT
              value: "8082"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  type: LoadBalancer
  selector:
    app: router
    tier: router
  ports:
    - port: 8082
      name: http
      targetPort: target-port
The skaffold config looks like this:
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
    - image: {myregistry}/router
      context: src/router/bin
  tagPolicy:
    gitCommit: {}
  local:
    push: false
deploy:
  kubectl:
    manifests:
      - ./kubernetes/**.yaml
Kubernetes correctly deploys two pods. The output of kubectl get pods looks like this:
NAME READY STATUS RESTARTS AGE
router-7f75f6f9df-c8mgp 1/1 Running 0 14m
router-7f75f6f9df-k248m 1/1 Running 0 14m
From the skaffold dev log output I can see that every request is routed to the router-7f75f6f9df-c8mgp pod. Even with different browsers, all requests end up at the exact same pod.
When I delete this pod, there is even a slight downtime of the router microservice, even though another pod is running.
What could be the problem of this behavior?
minikube doesn't 'properly' support the LoadBalancer service type. It used to be commonplace to just use the NodePort or externalIP service type instead; however, the official hello-minikube sample now states:
On cloud providers that support load balancers, an external IP address
would be provisioned to access the Service. On Minikube, the
LoadBalancer type makes the Service accessible through the minikube
service command
So effectively you should be able to use your minikube LoadBalancer service with: minikube service router
However, there is a neat solution developed for bare-metal Kubernetes clusters called MetalLB that may be able to help you test this in a better way on minikube.
You can install and configure it on minikube, e.g.:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
Here are some blog posts where others have explained the setup and use of metallb with minikube for LoadBalancer support:
Blog Post 1
Blog Post 2
Here are the official docs.
Hope that helps!
This is the way I understand the flow in question:
When requesting a Kubernetes service (via HTTP, for example) I am using port 80.
The request is forwarded to a pod (still on port 80)
The pod forwards the request to the (Docker) container that exposes port 80
The container handles the request
However, my container exposes a different port, let's say 3000.
How can I make a port mapping like 80:3000 in step 2 or 3?
There are confusing options like targetPort and hostPort in the Kubernetes docs, which didn't help me. kubectl port-forward seems to forward only my local (development) machine's port to a specific pod, for debugging.
These are the commands I use for setting up a service in Google Cloud:
kubectl run test-app --image=eu.gcr.io/myproject/my_app --port=80
kubectl expose deployment test-app --type="LoadBalancer"
I found that I needed to add some arguments to my second command:
kubectl expose deployment test-app --type="LoadBalancer" --target-port=3000 --port=80
This creates a service which directs incoming http traffic (on port 80) to its pods on port 3000.
A nicer way to do this whole thing is with YAML files, service.yaml and deployment.yaml, and calling
kubectl create -f deployment.yaml
kubectl create -f service.yaml
where the files have these contents
# deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: user-app
          image: eu.gcr.io/myproject/my_app
          ports:
            - containerPort: 3000
and
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
Note that the selector of the service must match the label of the deployment.
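Once applied on a cloud provider that provisions load balancers (Google Cloud in this case), the assigned external IP can be watched and then tested with something like:
kubectl get service app-service --watch
# wait until EXTERNAL-IP is no longer <pending>, then:
curl http://<EXTERNAL-IP>:80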