I have a simple microservice setup running in a minikube cluster. It is inspired by this example.
My setup includes a simple router microservice that contains a Golang web server. What I want to test now is the load balancing when there is more than one pod, but there seems to be no load balancing whatsoever.
The kubernetes file for the microservices looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  replicas: 2
  strategy: {}
  template:
    metadata:
      labels:
        app: router
        tier: router
    spec:
      containers:
      - image: {myregistry}/router
        name: router
        resources: {}
        ports:
        - name: target-port
          containerPort: 8082
        env:
        - name: PORT
          value: "8082"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: router
  labels:
    app: router
    tier: router
spec:
  type: LoadBalancer
  selector:
    app: router
    tier: router
  ports:
  - port: 8082
    name: http
    targetPort: target-port
The skaffold config looks like this:
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
  - image: {myregistry}/router
    context: src/router/bin
  tagPolicy:
    gitCommit: {}
  local:
    push: false
deploy:
  kubectl:
    manifests:
    - ./kubernetes/**.yaml
Kubernetes correctly deploys two pods. The output of kubectl get pods looks like this:
NAME READY STATUS RESTARTS AGE
router-7f75f6f9df-c8mgp 1/1 Running 0 14m
router-7f75f6f9df-k248m 1/1 Running 0 14m
From the skaffold dev log output I can see that every request is routed to the router-7f75f6f9df-c8mgp pod. Even with different browsers all requests end up at the exact same pod.
When I delete this pod, there is even a slight downtime of the router microservice, even though there is another pod running.
What could be the cause of this behavior?
minikube doesn't 'properly' support the LoadBalancer service type. It used to be commonplace to just use the NodePort or externalIP service type instead; however, the official hello-minikube sample now states:
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command
So effectively you should be able to use your minikube LoadBalancer service with: minikube service router
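For example, to verify the balancing you could fetch the URL minikube assigned to the service and send a batch of requests. This is only a sketch; it assumes your router answers on / and that you watch the skaffold dev log (or the pod logs) to see which pod served each request:
# URL minikube assigned to the LoadBalancer service
URL=$(minikube service router --url)
# Both pods should be listed as endpoints behind the service
kubectl get endpoints router
# Send a batch of requests and watch the logs to see which pod answers
for i in $(seq 1 20); do curl -s "$URL/" > /dev/null; done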
However, there is a neat solution called metallb, developed for bare-metal Kubernetes clusters, that may help you test this in a better way on minikube.
You can install and configure it on minikube. E.g.
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
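MetalLB v0.8.x is configured through a ConfigMap. As a rough sketch (the address range below is an assumption; pick free addresses from the same subnet that minikube ip reports):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # assumed range - must be unused addresses on the minikube network
      - 192.168.99.240-192.168.99.250
EOF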
Here are some blog posts where others have explained the setup and use of metallb with minikube for LoadBalancer support:
Blog Post 1
Blog Post 2
Here are the official docs.
Hope that helps!
Related
I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is a master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access the NodePort service from the internet, outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message which says "refused to connect".
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml, which I used to deploy the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner in k8s, so please help me understand how I can access my NodePort service from outside my k8s cluster.
I created a new simple Spring Boot application which returns "Hello world!!!" back to the user when the endpoint "/helloWorld" is invoked. I deployed this Spring Boot app into my k8s cluster using the below yaml configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following URL: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must. Flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugin on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster; the pod network CIDR must be the same as in the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
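To check that flannel came up and that the nodes turn Ready afterwards, something like this should work:
# flannel pods should be Running
kubectl get pods -A | grep flannel
# nodes should report Ready once the CNI is working
kubectl get nodes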
I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend, it is just an API built with FastAPI). However, I am trying to deploy this via Kubernetes, but am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a printout of the deployment, the namespace it is in, the pod template, a list of events, etc. So I am pretty sure it is properly deployed.
How can I access the application? What would the url be? I have tried a variety of localhost + port combinations to no avail. I am new to kubernetes so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI, I'm able to see my deployment, yet when I hit localhost:30001, I get an Unable to connect message.
You have given containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
Port-forward using kubectl port-forward deployment/my-deployment 8080:8080 (see the sketch after this list)
Create a NodePort service and use http://<NODEIP>:<NODEPORT>
Create a LoadBalancer service. This works only in a supported cloud environment such as AWS, GKE, etc.
Use an ingress controller such as nginx to expose the application.
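As a quick sketch for the deployment in the question (the nginx container listens on port 80), port-forwarding could look like this, assuming the pod is Running:
# forward local port 8080 to port 80 of the deployment's pod
kubectl port-forward deployment/my-deployment 8080:80
# in a second terminal
curl http://localhost:8080/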
By default, k8s applications are exposed only within the cluster. If you want to access them from outside of the cluster, you can select any of the below options:
Expose the Deployment as a NodePort service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the service and get the node port assigned to it (kubectl describe svc my-deployment-service). Then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080). As part of this service you get an external IP which can be used to access your service: http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoint using kubectl get ep
Thanks,
I have been playing with Digital Ocean's new managed Kubernetes service. I have created a new cluster using Digital Ocean's dashboard and, seemingly, successfully deployed my yaml file (attached).
Running kubectl get services in context gives:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-svc NodePort XX.XXX.XXX.XXX <none> 8080:30000/TCP 2h
kubernetes ClusterIP XX.XXX.X.X <none> 443/TCP 2h
My question is, how do I go about exposing my service without a load balancer?
I have been able to do this locally using minikube. To get the cluster IP I run minikube ip and use port number 30000, as specified in my nodePort config, to reach the api-svc service.
From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?
Thank you in advance!
My yaml file for reference:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  .dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <my-dockerhub-user>/api:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
    protocol: TCP
  selector:
    app: api
  type: NodePort
You can hit any of your worker nodes' IPs, for example http://worker-node-ip:30000/. You can get the worker node IPs from the DigitalOcean dashboard or by using the doctl CLI.
Slightly more detailed answer: DigitalOcean manages firewall rules for your NodePort services automatically, so once you expose the service, the NodePort is automatically open to public traffic from all worker nodes in your cluster. See docs
To find the public IP of any of your worker nodes, execute the following doctl commands:
# Get the first worker node from the first node-pool of your cluster
NODE_NAME=$(doctl kubernetes cluster node-pool get <cluster-name> <pool-name> -o json | jq -r '.[0].nodes[0].name')
WORKER_NODE_IP=$(doctl compute droplet get $NODE_NAME --template '{{.PublicIPv4}}')
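With that IP, the NodePort service from the question (nodePort: 30000) should respond directly, assuming the DigitalOcean firewall rules mentioned above are in place:
curl http://$WORKER_NODE_IP:30000/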
Using "type: NodePort" presume use of node external address (any node) and may be unsustainable because nodes might be changed/upgraded.
I have the following pods hello-abc and hello-def.
And I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
      - name: hello-abc
        image: hello-abc:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-abc"]
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
      - name: hello-def
        image: hello-def:v0.0.1
        imagePullPolicy: Always
        args: ["/hello-def"]
        ports:
        - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
  - port: 80
    targetPort: 5001
    protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, and if you have deployed both services and deployments into the same namespace, you can in many modern Kubernetes clusters take advantage of kube-dns and simply refer to the service by name.
Unfortunately, if kube-dns is not configured in your cluster (although that is unlikely), you cannot refer to it by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services into any container which is deployed into the same namespace.
Solution
This means that, to reach hello-def, you can do so like this:
curl http://hello-def-service:${HELLO_DEF_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: It's very possible that if the Service port changes, only pods that are created after the change in the same namespace will receive the new environment variables.
External Access
In addition, you can also reach your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This would require you to access your service by node-ip:nodePort
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
Ingress
To reach your service from outside, you should implement an ingress controller such as nginx-ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar pattern. In this case, both containers in the pod would share the same virtual network adapter and be accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
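For example, for a service named hello-def-service, pods created afterwards in the same namespace should see variables such as HELLO_DEF_SERVICE_SERVICE_HOST and HELLO_DEF_SERVICE_SERVICE_PORT. A quick way to inspect them (the pod name is a placeholder):
# print the injected service variables inside one of the hello-abc pods
kubectl exec <hello-abc-pod-name> -- env | grep HELLO_DEF_SERVICE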
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS record http://hello-def-service for the service running in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS record hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you still can find the virtual IP of the hello-def-service via environment variables in hello-abc pods. As is documented here.
I created a pod with an api and a web docker container in kubernetes using a yml file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod, this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carbon-relay ClusterIP *.*.*.* <none> 2003/TCP 78d
comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test.dply, I can without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
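Once one of those services exists, you can look up what was assigned (a NodePort in the 3xxxx range, or an external IP for the LoadBalancer variant) and hit it; the IP and port below are placeholders:
# shows the ClusterIP, external IP (if any) and the assigned port(s)
kubectl get svc test
curl http://<node-ip>:<node-port>/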
If you already have a pod and a service running, you can create an ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an ingress from an existing service: go to the Services and Ingress tab, select the service, click on Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a yaml file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}
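After applying the manifest (the filename below is just an example), it can take a few minutes until the ingress gets an external address, which you can watch with:
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress --watch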