I have configured a Kubernetes cluster using kubeadm by creating 3 VirtualBox nodes, each running CentOS (master, node1, node2). Each VirtualBox virtual machine is configured with 'Bridged' networking.
As a result, I have the following setup:
Master node 'master.k8s' running at 192.168.19.87 (VirtualBox)
Worker node 1 'node1.k8s' running at 192.168.19.88 (VirtualBox)
Worker node 2 'node2.k8s' running at 192.168.19.89 (VirtualBox)
Now I would like to access services running in the cluster from my local machine (the physical machine where the virtualbox nodes are running).
Running kubectl cluster-info I see the following output:
Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...
As an example, let's say I deploy the dashboard inside my cluster, how do I open the dashboard UI using a browser running on my physical machine?
The traditional way is to use kubectl proxy or a LoadBalancer, but since you are on a development machine, a NodePort can be used to publish the applications, as a LoadBalancer is not available in VirtualBox.
The following example deploys 3 replicas of an echo server running nginx and publishes the http port using a NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-echo
        image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082          # Cluster IP http://10.109.199.234:8082
    targetPort: 8080    # Application port
    nodePort: 30000     # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
You can access the servers using any of the VirtualBox IPs, like
http://192.168.50.11:30000 or http://192.168.50.12:30000 or http://192.168.50.13:30000
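To double-check which NodePort was allocated and test it from the physical machine, a quick check looks roughly like this (names and IPs are taken from the example above; the output is illustrative):

kubectl get service nginx-service-np
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
nginx-service-np   NodePort   10.109.199.234   <none>        8082:30000/TCP   1m

curl http://192.168.50.11:30000/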
See a full example at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The traditional way of getting access to the Kubernetes dashboard is documented in its README and is to use kubectl proxy.
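Roughly, that looks like this (the proxy URL below is an assumption that depends on the dashboard version and the namespace it was installed into):

kubectl proxy
# then open, in a browser on the same machine:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/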
One should not have to ssh into the cluster to access any kubernetes service, since that would defeat the purpose of having a cluster, and would absolutely shoot a hole in the cluster's security model. Any ssh to Nodes should be reserved for "in case of emergency, break glass" situations.
More generally speaking, a well-configured Ingress controller will surface services en masse and also has the very pleasing side effect that your local cluster operates exactly the same as your "for real" cluster, without any underhanded ssh-ery required.
Related
In my Kubernetes cluster setup, I have a Greenplum DB cluster (one master and 8 segment nodes) with a LoadBalancer service. Please refer to the service config below.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: greenplum
    greenplum-cluster: greenplum-cluster
  name: greenplum
spec:
  clusterIP: 10.101.251.127
  clusterIPs:
  - 10.101.251.127
  externalIPs:
  - 11.4.8.141
  externalTrafficPolicy: Cluster
  healthCheckNodePort: 32572
  ports:
  - name: psql
    nodePort: 32198
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    statefulset.kubernetes.io/pod-name: master-0
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
However, a few hours after deployment, the externalTrafficPolicy value is set to Local instead of Cluster, which made the service inaccessible via the defined external IP. Is there any reason for this? It changes automatically even after I edit the service configuration.
Or is there any other way to access this Greenplum DB (TCP 5432) such as ingress?
Can you provide more details on your Kubernetes cluster setup? Is this bare-metal, cloud managed, kubeadm?
It would be useful to see the logs from when you try to connect to the DB, and also some traces.
The only difference between externalTrafficPolicy Local and Cluster is that Cluster balances the traffic across all matching pods in the cluster, giving a nice distribution between your pods, while Local only distributes the traffic between the pods running on the node where the request lands from the LB.
Additionally, using Local preserves the client IP of the request, whereas Cluster does not.
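If you just need to force the field back to Cluster while you investigate what keeps changing it, a quick (and admittedly temporary) fix is to patch the service; this does not address the root cause:

kubectl patch service greenplum -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'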
There is a nice video that explains this in detail here.
Excuse my relative networking ignorance, but I've read a lot of docs and still have trouble understanding this (perhaps due to lack of background in networks).
Given this Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package.json /code/
WORKDIR /code
RUN npm install
COPY server.js /code/
EXPOSE 3000
CMD ["node", "server.js"]
...this deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-k8s
          ports:
            - containerPort: 3000
              protocol: TCP
and this service:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
My understanding is that:
The app in my container is exposing itself to the outside world on 3000
my deployment yaml is saying, "the container is listening on 3000"
my service is saying, "map port 3000 internally to port 80," which is the default port, so you don't have to add the port to the host.
I'm using the NodePort type instead of LoadBalancer because on local clusters like Docker Desktop it works out of the box. It opens up a random port between 30000–32767 on every node (pod?) in the cluster to the outside. That node port is how I access my app from outside, e.g. localhost:30543.
Are my assumptions correct? I am unclear about why I can't access my app at localhost:80, or just localhost, if the service makes the mapping between the container port and the outside world. What's the point of the mapping between 3000 and 80 in the service?
In short, why do I need NodePort?
There are two networking layers, which we could call "inside the cluster" and "outside the cluster". The Pod and the Service each have their own IP address, but these are only inside the cluster. You need the NodePort to forward a request from outside the cluster to inside the cluster.
In a "real" Kubernetes cluster, you'd make a request...
...to http://any-kubernetes-node.example.com:31245/, with a "normal" IP address in the way you'd expect a physical system to have, connecting to the NodePort port, which forwards...
...to http://web-service.default.svc.cluster.local:80/, with a cluster-internal IP address and the service port, which looks at the pods it selects and forwards...
...to http://10.20.30.40:3000/, using the cluster-internal IP address of any of the matching pods and the target port from the service.
The containerPort: in the pod spec isn't strictly required (but if you give it name: http then you can have the service specify targetPort: http without knowing the specific port number). EXPOSE in the Dockerfile means pretty much nothing in this sequence.
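As a sketch of that named-port variant (purely illustrative, reusing the names from the question):

# in the deployment's pod template:
      containers:
        - name: web
          image: kahunacohen/hello-k8s
          ports:
            - name: http
              containerPort: 3000
# in the service:
  ports:
    - port: 80
      targetPort: http   # resolves to the containerPort named "http"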
This sequence also gives you some flexibility in not needing to know where things are running. Say you have 100 nodes and 3 replicas of your pod; the initial connection can be to any node, and the service will forward to all of the target pods, without you needing to know any of these details from the caller.
(For completeness, a LoadBalancer type service requests that a load balancer be created outside the cluster; for example, an AWS ELB. This forwards to any of the cluster nodes as in step 1 above. If you're not in a cloud environment and the cluster doesn't know how to create the external load balancer automatically, it's the same as NodePort.)
If we reduce this to a local Kubernetes installation (Docker Desktop, minikube, kind) the only real difference is that there's only one node; the underlying infrastructure is still built as though it were a multi-node distributed cluster. How exactly you access a service differs across these installations. In Docker Desktop, from the host system, you can use localhost as the "normal" "external" node IP address in the first step.
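For example, on Docker Desktop you could look up the assigned node port and then hit it on localhost (the cluster IP and node port shown here are only examples):

kubectl get service web-service
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
web-service   NodePort   10.98.111.222   <none>        80:31245/TCP   1m

curl http://localhost:31245/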
I have built a K3s (https://k3s.io) cluster on a set of Raspberry Pi4 computers.
The controller (ctrl-1) node is a gateway in that it has 2 network interfaces. One is connected to my LAN and the other is connected to a network that it creates, e.g. K3S-LAN. The two nodes (node-1 and node-2) are deployed to the K3S-LAN.
I want to be able to access the applications running on the nodes through ctrl-1, e.g. from the LAN. This is because this cluster is meant to be portable so only the ctrl-1 node needs to be connected to the guest LAN. (Yes there are issues with DNS names etc to be sorted out, but I want to get the basics running first).
This means that I need to be able to "proxy" the ingress through ctrl-1. I thought I had the right idea for this in that I deployed "nginx-ingress" to the master, using Helm. However I forgot about the service for this - this has been scheduled on the nodes, whereas it needs to be on the controller so that the ports are opened up (I think). However I cannot find how to make the service run on the controller.
At the moment I have the service running with a type of NodePort. I could install MetalLB so that I have LoadBalancer capabilities. However with what I have seen I am not sure if this would help or not.
ctrl-1 does not have any taints setup on it, just the role of master.
Am I barking up the wrong tree here? I guess this might not be the intended use case of Kubernetes, but I am playing around with an idea. Thanks for any ideas that people have.
Update:
I have just thought that the way around this might be to run HAProxy on ctrl-1 (as another service on the host) and setup rules to proxy to the necessary services within the cluster. That would act as the bridge between the networks.
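Something like this minimal HAProxy sketch is what I have in mind, running on ctrl-1 and forwarding LAN traffic to a NodePort on the K3S-LAN nodes (the node IPs and the 30080 port below are placeholders, not my actual values):

frontend http_in
    bind *:80
    mode tcp
    default_backend k3s_nodeport

backend k3s_nodeport
    mode tcp
    server node-1 192.168.200.11:30080 check
    server node-2 192.168.200.12:30080 check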
You just need to expose your pod via a NodePort type service and it can be accessed via http://master-node-ip:nodeport. Make sure that kube-proxy is running on all master and worker nodes.
The ingress approach should also work as long as you have kube-proxy running on your master. You deploy nginx ingress on your cluster and it will get scheduled onto a worker node. Then you can expose the nginx ingress controller itself using a NodePort service. After this you can create an ingress resource to configure the nginx ingress controller to route traffic to your backend pods and services running on the worker nodes. The services for the backend pods should be of type ClusterIP.
Deploy the nginx ingress controller and expose it via a NodePort service using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/baremetal/service-nodeport.yaml
Deploy an nginx pod (nginx is an example; this should be your pod): kubectl run nginx --generator=run-pod/v1 --image=nginx
Expose the nginx pod via a ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
Create an ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
With the above setup I can now access nginx and get "Welcome to nginx!" via http://master-node-ip:NodePort (the NodePort of the nginx ingress controller).
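To find the NodePort that was assigned to the ingress controller, something like the following should work (the namespace comes from the manifest above; the output is illustrative):

kubectl get service -n ingress-nginx ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.96.229.38   <none>        80:30100/TCP,443:30101/TCP   5m

curl http://master-node-ip:30100/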
I've installed minikube to learn kubernetes a bit better.
I've deployed some apps and services which have IPs in a range of 10.x.x.x (private IPs). I can expose my services on minikube and visit them in my browser, but I want to use the private IPs without exposing them.
How can I visit (VPN/proxy-wise) the private IPs of services in minikube?
Minikube is Kubernetes with only one node, with the master server running on that node.
It gives you the possibility to learn how Kubernetes works with the minimum hardware required, and it is ideal for testing purposes and for running seamlessly on a laptop. Minikube still uses the mature network stack from Kubernetes. This means that ports are exposed by services and services communicate with pods over the cluster's virtual network.
To understand what is communicating with what, let me explain what ClusterIP does: it exposes the service on an internal IP in the cluster. This type makes the service reachable only from within the cluster.
You can get the ClusterIP with the command:
kubectl get services test-service
So, after you create a new service, you would like to establish connections to its ClusterIP.
Basically, there are three ways to connect to a backend resource:
1/ Use kube-proxy - this proxy reflects services as defined in the Kubernetes API and does simple TCP and UDP streaming to a backend, or to a set of them in more advanced configurations. Service cluster IPs and ports are currently found through Docker-compatible environment variables specifying the ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs. The user must create a service through the apiserver API to configure the proxy.
The example below shows how we can use selectors to define a connection to port 50000 on the ClusterIP - config.yaml may consist of:
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: ci
spec:
  type: ClusterIP
  selector:
    app: master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: slaves
2/ Use port forwarding to access the application - first check that the kubectl command-line tool can communicate with your minikube cluster, and if so, find the service port from the ClusterIP configuration.
kubectl get svc | grep test-service
Let's assume the service test-service works on port 5555, so to do port forwarding run the command:
kubectl port-forward svc/test-service 5555:5555
After that, your service will be available on localhost:5555.
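With that forward still running in one terminal, the service can then be reached from the host, for example:

curl http://localhost:5555/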
3/ If you are familiar with the concept of pod networking, you can declare the container ports in the pod's manifest file. A user can connect to the pod network by defining a manifest like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
When a container starts from a manifest file like the one above, TCP port 8080 of the pod is exposed as the container's port 8080.
Please keep in mind that many services rely on ClusterIPs for the cluster itself to work properly. I think it is not good practice to treat ClusterIPs as regular network services - in the worst-case scenario it can quickly break a cluster by leaving its internal network connections in an invalid state.
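If you specifically want to reach the ClusterIPs themselves from the host, recent minikube versions also ship minikube tunnel, which creates a network route on the host to the cluster's service CIDR (this assumes a recent minikube; behaviour varies by driver):

minikube tunnel
# in another terminal, the ClusterIPs reported by `kubectl get svc`
# should now be routable from the host, e.g. curl http://10.96.x.y:80/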
It is so cool that we have a LoadBalancer in Docker for Mac.
I have a question regarding ports created:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 9999
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer
This gives me (kubectl get service):
nginx LoadBalancer 10.96.128.253 localhost 9999:32455/TCP 2s
What is 32455?
Thanks
32455 is your nodePort. Kubernetes automatically assigns a unique nodePort for any service that is accessible outside of the cluster (including services of type LoadBalancer). You can specify it yourself as well in the same config, as long as the port is within the node port range (30000-32767 by default) and not already in use.
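For example, pinning the node port explicitly could look like this (32455 here is just the value from your output; any free port in the node port range works):

spec:
  ports:
  - port: 9999
    targetPort: 80
    nodePort: 32455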
With regards to Docker for Mac specifically, Kubernetes is creating a service which is listening on localhost:9999. This is an "egress" that Kubernetes is creating since you don't actually have a load balancer; it's essentially simulating one. Beyond the "load balancer/egress", it still behaves just like it would in production - that is, Kubernetes assigns a nodePort for the service. If you curl localhost:32455, you will likely get the same response as if you had curled localhost:9999.