I want to expose my Kubernetes cluster with Minikube.
Consider my tree:
.
├── deployment.yaml
├── Dockerfile
├── server.js
└── service.yaml
I build my docker image locally and am able to run all pods via
kubectl create -f deployment.yaml
kubectl create -f service.yaml
However, when I run:
$ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        2h
nodeapp      LoadBalancer   10.110.106.83   <pending>     80:32711/TCP   9m
There is no external IP to connect to the cluster. I tried to expose one pod, but the external IP stays <pending>. Why is there no external IP?
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: hello-node
        image: hello-node:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
and
cat service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  type: LoadBalancer
$ cat server.js
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello User');
};
var www = http.createServer(handleRequest);
// Listen on port 3000, matching the containerPort in deployment.yaml
www.listen(3000);
According to the K8s documentation here, type=LoadBalancer can be used on AWS, GCP, and other supported clouds, but not on Minikube:
On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service.
Specify the type as NodePort as mentioned here, and the service will be exposed on a port on the Minikube node. Then the service can be accessed using the URL from the host OS:
minikube service nodeapp --url
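As a minimal sketch, the same service.yaml switched to NodePort could look like this (the explicit nodePort is optional; Kubernetes picks one from the 30000-32767 range if it is omitted):
apiVersion: v1
kind: Service
metadata:
  name: nodeapp
spec:
  type: NodePort
  selector:
    app: nodeapp
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
After re-applying it, minikube service nodeapp --url prints the URL you can reach from the host.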
A LoadBalancer-type service can be achieved in Minikube using the MetalLB project: https://github.com/google/metallb
This allows you to use an external IP offline and in Minikube, not only with a cloud provider.
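For reference, a minimal sketch of a MetalLB layer-2 address pool in the legacy ConfigMap format; the address range here is an assumption and should be free addresses on the same subnet as minikube ip (newer MetalLB releases use IPAddressPool/L2Advertisement resources instead, and Minikube also ships a metallb addon):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.100-192.168.49.110   # assumed free range on the Minikube network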
Good luck!
If you run the following command:
kubectl expose deployment nodeapp --type=NodePort
Then run:
kubectl get services
It should show you the service and what port it's exposed on.
You can then reach the service using your Minikube IP via:
curl $(minikube service nodeapp --url)
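Or, as a sketch using minikube ip directly with the node port from the earlier output (32711; the port will differ if a new service is created):
curl http://$(minikube ip):32711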
Related
I am trying to run an application locally on k8s, but I am not able to reach it.
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: listings
  labels:
    app: listings
spec:
  replicas: 2
  selector:
    matchLabels:
      app: listings
  template:
    metadata:
      labels:
        app: listings
    spec:
      containers:
        - image: mydockerhub/listings:latest
          name: listings
          envFrom:
            - secretRef:
                name: listings-secret
            - configMapRef:
                name: listings-config
          ports:
            - containerPort: 8000
              name: django-port
and this is my service:
apiVersion: v1
kind: Service
metadata:
  name: listings
  labels:
    app: listings
spec:
  type: NodePort
  selector:
    app: listings
  ports:
    - name: http
      port: 8000
      targetPort: 8000
      nodePort: 30036
      protocol: TCP
At this stage, I don't want to use other methods like ingress or ClusterIP, or load balancer. I want to make nodePort work because I am trying to learn.
When I run kubectl get svc -o wide I see
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
listings   NodePort   10.107.77.231   <none>        8000:30036/TCP   28s   app=listings
When I run kubectl get node -o wide I see
NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                      CONTAINER-RUNTIME
minikube   Ready    control-plane,master   85d   v1.23.3   192.168.49.2   <none>        Ubuntu 20.04.2 LTS   5.10.16.3-microsoft-standard-WSL2   docker://20.10.12
and when I run minikube ip, it shows 192.168.49.2.
When I try to open http://192.168.49.2:30036/health, it does not open ("This site can't be reached").
How should I expose my application externally?
Note that I have created the required ConfigMap and Secret objects. Also note that this is a simple Django RESTful application: if you hit the /health endpoint it returns success, and that's it, so there is no problem with the application itself.
That is because your local machine and Minikube are not in the same network segment; you must do something more to access a Minikube service on Windows.
First
$ minikube service list
That will show your service details, including name, URL, nodePort, and targetPort.
Then
$ minikube service --url listings
It will open a port listening on your Windows machine that forwards traffic to the Minikube node port.
Or you can use the kubectl port-forward command to expose the service on a host port, like:
kubectl port-forward --address 0.0.0.0 -n default service/listings 30036:8000
Then try with http://localhost:30036/health
I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is a master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access my NodePort service from the internet outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs, but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message which says "refused to connect".
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml which I used for deploying the spring boot app and its corresponding service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner in k8s so please help me on how I can access my node port service outside my k8s cluster.
I created a new simple Spring Boot application which returns "Hello world!!!" back to the user when the "/helloWorld" endpoint is invoked. I deployed this Spring Boot app into my k8s cluster using the below YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following url <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
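As a quick check from the host (using the master node IP from the question; substitute your own node IP):
curl http://192.168.254.94:30001/helloWorld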
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like Flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; Flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugin on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster; the CIDR setting must match the Flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
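To verify the network add-on is up before testing the NodePort service again, something like the following should show the Flannel and CoreDNS pods Running and the nodes Ready:
kubectl get pods --all-namespaces
kubectl get nodes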
I follow the below steps to deploy my custom jar.
1) I created a Docker image with the below Dockerfile:
FROM openjdk:8-jre-alpine3.9
LABEL MAINTAINER DINESH
LABEL version="1.0"
LABEL description="First image with Dockerfile & DINESH."
RUN mkdir /app
COPY LoginService-0.0.1-SNAPSHOT.jar /app
WORKDIR /app
CMD ["java", "-jar", "LoginService-0.0.1-SNAPSHOT.jar"]
2) Deployed it on Kubernetes with the below deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: localhost:5000/my-image:latest
        name: hello-world
        ports:
        - containerPort: 8000
3) Exposed it as a service with the below commands:
minikube tunnel
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --port=8000 --target-port=8000
4) Output of kubectl describe svc my-service:
Name:                     my-service
Namespace:                default
Labels:                   app.kubernetes.io/name=load-balancer-example
Annotations:              <none>
Selector:                 app.kubernetes.io/name=load-balancer-example
Type:                     LoadBalancer
IP:                       10.96.142.93
LoadBalancer Ingress:     10.96.142.93
Port:                     <unset>  8000/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  31284/TCP
Endpoints:                172.18.0.7:8000,172.18.0.8:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
The pod is in the Running state.
I am trying to access the pod using "10.96.142.93" as given above, i.e. http://10.96.142.93:8090 (my LoginService is started on port 8090), but I am unable to access the pod. Please help.
Try to access the NodePort at localhost:31284, and please use the service type NodePort instead of LoadBalancer, because the LoadBalancer service type is mostly used at the cloud level.
Also, use the same targetPort as the port you configured in the pod definition YAML file,
so your URL should be http://10.96.142.93:8000.
Another way is by using port-forward:
kubectl port-forward pod_name 80:8000 (this will map the pod port to a localhost port).
Then access it on http://localhost:80.
The Kubernetes LoadBalancer service type is provided only in places such as AWS and GCP. Use MetalLB in an on-premises environment.
Otherwise, try modifying the service type to NodePort and accessing http://localhost:31284.
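As a sketch, an equivalent NodePort service for the deployment above could look like this, reusing the ports from the kubectl expose command and the nodePort from the describe output:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 31284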
I have been playing with Digital Ocean's new managed Kubernetes service. I have created a new cluster using Digital Ocean's dashboard and, seemingly, successfully deployed my yaml file (attached).
Running kubectl get services in context:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
api-svc      NodePort    XX.XXX.XXX.XXX   <none>        8080:30000/TCP   2h
kubernetes   ClusterIP   XX.XXX.X.X       <none>        443/TCP          2h
My question is, how do I go about exposing my service without a load balancer?
I have been able to do this locally using minikube. To get the cluster IP I run minikube ip and use port number 30000, as specified in my nodePort config, to reach the api-svc service.
From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?
Thank you in advance!
My YAML file for reference:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  .dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <my-dockerhub-user>/api:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
    protocol: TCP
  selector:
    app: api
  type: NodePort
You can hit any of your worker nodes' IPs, for example http://worker-node-ip:30000/. You can get the worker node IPs from the DigitalOcean dashboard or by using the doctl CLI.
Slightly more detailed answer: DigitalOcean manages firewall rules for your NodePort services automatically, so once you expose the service, the NodePort is automatically open to public traffic from all worker nodes in your cluster. See docs
To find the public IP of any of your worker nodes, execute the following doctl commands:
# Get the first worker node from the first node-pool of your cluster
NODE_NAME=$(doctl kubernetes cluster node-pool get <cluster-name> <pool-name> -o json | jq -r '.[0].nodes[0].name')
WORKER_NODE_IP=$(doctl compute droplet get $NODE_NAME --template '{{.PublicIPv4}}')
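With that in hand, the api-svc from the question (nodePort 30000) should answer on any worker node, for example:
curl http://$WORKER_NODE_IP:30000/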
Using "type: NodePort" presumes use of a node's external address (any node) and may be unsustainable because nodes might be changed/upgraded.
I created a pod with an API and a web Docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME   READY   STATUS    RESTARTS   AGE
test   2/2     Running   0          5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment, though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod, this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
carbon-relay     ClusterIP      *.*.*.*      <none>        2003/TCP       78d
comparison-api   LoadBalancer   *.*.*.*      *.*.*.*       80:30920/TCP   15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you have tried to expose a deployment (and not your pod).
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
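After exposing the pod, you can check the allocated node port (or the external IP, once assigned) with:
kubectl get service test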
If you already have a pod and service running, you can create an Ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an Ingress from an existing service: go to the Services & Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "example-ingress"
  namespace: "default"
spec:
  defaultBackend:
    service:
      name: "example-service"
      port:
        number: 8123
status:
  loadBalancer: {}
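Assuming the manifest is saved as example-ingress.yaml, it can be applied and checked with:
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress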