kubernetes version: 1.5.2
os: centos 7
etcd version: 3.4.0
First, I created an etcd pod. The etcd Dockerfile and pod YAML file look like this:
etcd Dockerfile:
FROM alpine
COPY . /usr/bin
WORKDIR /usr/bin
CMD etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379
EXPOSE 2379
Pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: storageplatform
  labels:
    app: etcd
spec:
  containers:
  - name: etcd
    image: "karldoenitz/etcd:3.4.0"
    ports:
    - containerPort: 2379
      hostPort: 12379
After building the Docker image and pushing it to Docker Hub, I ran kubectl apply -f etcd.yaml to create the etcd pod.
The IP of the etcd pod is 10.254.140.117. I ran the command ETCDCTL_API=3 etcdctl --endpoints=175.24.47.64:12379 put 1 1 and got OK.
My service yaml:
apiVersion: v1
kind: Service
metadata:
  name: storageservice
  namespace: storageplatform
spec:
  type: NodePort
  ports:
  - port: 12379
    targetPort: 12379
    nodePort: 32379
  selector:
    app: etcd
I applied the YAML file to create the service, then ran kubectl get services -n storageplatform and got this information:
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
storageplatform storageservice 10.254.140.117 <nodes> 12379:32379/TCP 51s
Finally, I ran the command
ETCDCTL_API=3 etcdctl --endpoints=10.254.140.117:32379 get 1
or
ETCDCTL_API=3 etcdctl --endpoints={host-ip}:32379 get 1
I got Error: context deadline exceeded.
What's wrong? How do I make the service work?
You defined a service that is available inside the Kubernetes network using the service name/IP (10.254.140.117) and the service port (12379), and which is available on ALL nodes of the Kubernetes cluster, even outside the Kubernetes network, via the node port (32379).
You need to fix the service so it maps to the correct container port: targetPort must match the pod's containerPort (and the port in the Dockerfile).
If the error Error: context deadline exceeded persists, it hints at a communication problem. This can be explained when using the internal service IP with the external node port (your first get 1). For the node port (your second command) I assume that either the etcd pod is not running properly, or the port is firewalled on the node.
Change the service to refer to the containerPort instead of the hostPort:
apiVersion: v1
kind: Service
metadata:
  name: storageservice
  namespace: storageplatform
spec:
  type: NodePort
  ports:
  - port: 2379
    targetPort: 2379
    nodePort: 32379
  selector:
    app: etcd
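With the corrected service applied, the NodePort should answer on any node's IP (a quick check, assuming the node IP 175.24.47.64 from the question):
ETCDCTL_API=3 etcdctl --endpoints=175.24.47.64:32379 put 1 1
ETCDCTL_API=3 etcdctl --endpoints=175.24.47.64:32379 get 1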
Related
I am getting ERR_CONNECTION_TIMED_OUT when trying to access a minikube service on localhost.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityserver
spec:
  selector:
    matchLabels:
      app: identityserver
  template:
    metadata:
      labels:
        app: identityserver
    spec:
      containers:
      - name: identityserver
        image: identityserver:0
        ports:
        - containerPort: 5001
        imagePullPolicy: "Never"
I have created the service as follows:
apiVersion: v1
kind: Service
metadata:
  name: identityserver
spec:
  type: NodePort
  selector:
    app: identityserver
  ports:
  - port: 5001
    nodePort: 30002
I am trying to load it in my local browser using the following command, but it is not accessible on localhost. Internal Kubernetes apps are able to communicate with the service, but external clients are not.
minikube service identityserver
I tried making the type ClusterIP and then it worked with port forwarding; only NodePort has the access issue.
kubectl port-forward service/identityserver 18080:5001 --address 0.0.0.0
This seems to be an issue with the Docker driver. I was able to run this with the VirtualBox driver.
So I just had to start minikube with the VirtualBox driver. (Even though virtualization was enabled on my machine it was giving an error, so I had to append the --no-vtx-check flag; you can skip that if you don't hit an error without it.)
minikube start --driver=virtualbox --no-vtx-check
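Once minikube is up on the VirtualBox driver, the service URL it prints should be reachable from the host (a quick check against the identityserver service above):
minikube service identityserver --url
curl $(minikube service identityserver --url)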
There are several ways of running minikube on Windows + Docker:
Docker Desktop app (with the Enable Kubernetes option)
Docker Desktop app (without the Enable Kubernetes option), installing minikube in wsl2
No Docker Desktop at all, installing docker and minikube in wsl2
Let's test it with the link you gave in the comments - Set up Ingress on Minikube with the NGINX Ingress Controller.
Docker Desktop v.20.10.12 (with Enable Kubernetes option v.1.22.5), Win10, wsl2 backend.
Enable Kubernetes in Docker Desktop.
Check if ingress-controller is installed:
$ kubectl get pods -n ingress-nginx
The output should be similar to:
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-g9g49 0/1 Completed 0 11m
ingress-nginx-admission-patch-rqp78 0/1 Completed 1 11m
ingress-nginx-controller-59b45fb494-26npt 1/1 Running 0 11m
Create a Deployment using the following command:
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
Expose the Deployment:
kubectl expose deployment web --type=NodePort --port=8080
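Optionally, confirm the Service was created and note its assigned NodePort:
kubectl get service web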
Create example-ingress.yaml from the following file:
$ kubectl apply -f example-ingress.yaml
$ cat example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx # this line is essential!
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Verify the IP address is set (kubectl get ingress):
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> hello-world.info localhost 80 38s
Add the following line to the bottom of the C:\Windows\System32\drivers\etc\hosts file on your computer (you will need administrator access):
127.0.0.1 hello-world.info
DONE. Open hello-world.info in a browser.
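Or test it from a terminal instead of a browser (relying on the hosts entry above):
curl http://hello-world.info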
How to access the NodePort service? In C:\Windows\System32\drivers\etc\hosts find these lines:
# Added by Docker Desktop
192.168.1.179 host.docker.internal
192.168.1.179 gateway.docker.internal
Use this IP and node port: curl 192.168.1.179:portNumber
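For the identityserver service defined earlier, that would be (assuming the same Docker Desktop IP):
curl 192.168.1.179:30002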
I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is a master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access my NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of my nodes in the k8s cluster - master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message saying "refused to connect".
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml, which I used for deploying the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner in k8s, so please help me with how I can access my NodePort service from outside my k8s cluster.
I created a new simple Spring Boot application which returns "Hello world!!!" back to the user when the "/helloWorld" endpoint is invoked. I deployed this Spring Boot app into my k8s cluster using the below YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following URL: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
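For example (a sketch; substitute your master node's IP for the placeholder):
curl http://<K8S_MASTER_NODE_IP>:30001/helloWorld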
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like Flannel?
If yes, check your CIDR settings here:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; Flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugin on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster with a CIDR setting that is the same as the Flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
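A quick sanity check once the manifest is applied (hedging on the namespace: older Flannel manifests deploy to kube-system, newer ones to kube-flannel, so search all namespaces):
kubectl get pods -A | grep flannel
kubectl get nodes
The nodes should move to Ready once the network add-on is up.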
I have been playing with DigitalOcean's new managed Kubernetes service. I have created a new cluster using DigitalOcean's dashboard and, seemingly, successfully deployed my YAML file (see below).
Running kubectl get services in the cluster's context gives:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-svc NodePort XX.XXX.XXX.XXX <none> 8080:30000/TCP 2h
kubernetes ClusterIP XX.XXX.X.X <none> 443/TCP 2h
My question is, how do I go about exposing my service without a load balancer?
I have been able to do this locally using minikube. To get the cluster IP I run minikube ip and use port number 30000, as specified in my nodePort config, to reach the api-svc service.
From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?
Thank you in advance!
My YAML file for reference:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  .dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: <my-dockerhub-user>/api:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30000
    protocol: TCP
  selector:
    app: api
You can hit any of your worker nodes' IPs, for example http://worker-node-ip:30000/. You can get the worker node IPs from the DigitalOcean dashboard or via the doctl CLI.
Slightly more detailed answer: DigitalOcean manages firewall rules for your NodePort services automatically, so once you expose the service, the NodePort is automatically open to public traffic from all worker nodes in your cluster. See docs
To find the public IP of any of your worker nodes, execute the following doctl commands:
# Get the first worker node from the first node-pool of your cluster
NODE_NAME=$(doctl kubernetes cluster node-pool get <cluster-name> <pool-name> -o json | jq -r '.[0].nodes[0].name')
WORKER_NODE_IP=$(doctl compute droplet get $NODE_NAME --template '{{.PublicIPv4}}')
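Then hit the NodePort service on that node (using the variables from the commands above):
curl http://$WORKER_NODE_IP:30000/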
Using "type: NodePort" presume use of node external address (any node) and may be unsustainable because nodes might be changed/upgraded.
I want to expose my kubernetes cluster with minikube.
consider my tree
.
├── deployment.yaml
├── Dockerfile
├── server.js
└── service.yaml
I built my Docker image locally and am able to run all pods via
kubectl create -f deployment.yaml
kubectl create -f service.yaml
However, when I run
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
nodeapp LoadBalancer 10.110.106.83 <pending> 80:32711/TCP 9m
There is no external IP to connect to the cluster with. I tried to expose one pod, but the external IP stays <pending>. Why is there no external IP?
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: hello-node
        image: hello-node:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
and
cat service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  type: LoadBalancer
$ cat server.js
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello User');
};
var www = http.createServer(handleRequest);
www.listen(3000); // listen on the containerPort (3000) declared in the deployment
According to the K8s documentation here, type=LoadBalancer can be used on AWS, GCP, and other supported clouds, not on Minikube:
On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service.
Specify the type as NodePort as mentioned here, and the service will be exposed on a port on the Minikube VM. Then the service can be accessed using the URL from the host OS:
minikube service nodeapp --url
A LoadBalancer-type service can be achieved in minikube using the MetalLB project: https://github.com/google/metallb
This allows you to use an external IP offline and in minikube, not only with a cloud provider.
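For reference, a minimal sketch of the legacy ConfigMap-based MetalLB layer 2 configuration; the metallb-system namespace comes from the MetalLB install manifests, and the address range is an assumption you must adapt to your minikube network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.110   # assumed free range; adjust to your setup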
Good luck!
If you run the following command:
kubectl expose deployment nodeapp --type=NodePort
Then run:
kubectl get services
It should show you the service and what port it's exposed on.
You can then hit the service via the URL minikube reports for it:
curl $(minikube service nodeapp --url)
I created a pod with an api and a web docker container in Kubernetes using a YAML file (see below).
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    purpose: test
spec:
  containers:
  - name: api
    image: gcr.io/test-1/api:latest
    ports:
    - containerPort: 8085
      name: http
      protocol: TCP
  - name: web
    image: gcr.io/test-1/web:latest
    ports:
    - containerPort: 5000
      name: http
      protocol: TCP
It shows my pod is up and running:
NAME READY STATUS RESTARTS AGE
test 2/2 Running 0 5m
but I don't know how to expose it from here.
It seems odd that I would have to run kubectl run .... again, as the pod is already running. It does not show a deployment, though.
If I try something like
kubectl expose deployment test --type="NodePort" --port 80 --target-port 5000
it complains about deployments.extensions "test" not found. What is the cleanest way to deploy from here?
To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.
So if my deployment.yaml looks like this:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: test-dply
spec:
  selector:
    # Defines the selector that can be matched by a service for this deployment
    matchLabels:
      app: test_pod
  template:
    metadata:
      labels:
        # Puts the label on the pod; this must match the matchLabels selector
        app: test_pod
    spec:
      # Our containers for training each model
      containers:
      - name: mycontainer
        image: myimage
        imagePullPolicy: Always
        command: ["/bin/bash"]
        ports:
        - name: containerport
          containerPort: 8085
Then the service that would link to it is:
kind: Service
apiVersion: v1
metadata:
  # Name of our service
  name: prodigy-service
spec:
  # LoadBalancer type to allow external access to multiple ports
  type: LoadBalancer
  selector:
    # Will deliver external traffic to the pod holding each of our containers
    app: test_pod
  ports:
  - name: sentiment
    protocol: TCP
    port: 80
    targetPort: containerport
You can deploy these two items by using kubectl create -f /path/to/dply.yaml and kubectl create -f /path/to/svc.yaml. Quick note: The service will allocate a public IP address, which you can find using kubectl get services with the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carbon-relay ClusterIP *.*.*.* <none> 2003/TCP 78d
comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d
It can take several minutes to allocate the IP, just a forewarning. But the LoadBalancer's IP is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test-dply deployment, I can do so without worrying about my service being impacted. You should rarely have to spin down services.
You have created a pod, not a deployment.
Then you tried to expose a deployment (which doesn't exist), not your pod.
Try:
kubectl expose pod test --type=NodePort --port=80 --target-port=5000
kubectl expose pod test --type=LoadBalancer --port=XX --target-port=XXXX
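After exposing the pod, the assigned node port (or external IP) shows up in the service listing:
kubectl get service test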
If you already have a pod and a service running, you can create an ingress for the service you want to expose to the internet.
If you want to create it through the console, Google Cloud provides a really easy way to create an ingress from an existing service: go to the Services and Ingress tab, select the service, click Create Ingress, and fill in the name and other mandatory fields.
Or you can create it using a YAML file:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "example-ingress"
namespace: "default"
spec:
defaultBackend:
service:
name: "example-service"
port:
number: 8123
status:
loadBalancer: {}
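Apply it and wait for the ingress to get an address (assuming you saved the manifest above as example-ingress.yaml):
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress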