Kubernetes test environment

I have a hosted VPS and I would like to use this machine as a single node Kubernetes test environment. Is it possible to create a single-node Kubernetes cluster on the VPS, deploy pods to it using, for example, GitLab and test the application from outside the machine? I would like to locally develop, push to git and then deploy on this testing/staging environment.
Thank you

Answering this part of the whole question:
I have a hosted VPS and I would like to use this machine as a single node kubernetes test environment.
A good starting point could be to cite part of my own answer from Serverfault:
There are a lot of options to choose from. Each solution has its advantages and disadvantages. It will also depend on the operating system your VM is deployed with.
Some of the options are the following:
MicroK8S - as pointed out by user @Sekru
Minikube
Kind
Kubeadm
Kubespray
Kelsey Hightower: Kubernetes the hard way
Each of the solutions linked above has a link to its respective homepage, where you can find installation steps/tips. Each solution is different, and I encourage you to check whether the selected option suits your needs.
You'll need to review the networking part of each of the above solutions, as some of them make it easier (or harder) to expose your workload outside of the environment (i.e. make it accessible from the Internet).
It all boils down to your requirements/expectations and the requirements of each of the solutions.
MicroK8S setup:
I do agree with the answer provided by community member @Sekru, but I also think it could be beneficial to add an example of such a setup. Assuming that you have a microk8s-compatible OS:
sudo snap install microk8s --classic
sudo microk8s enable ingress
sudo microk8s kubectl create deployment nginx --image=nginx
sudo microk8s kubectl expose deployment nginx --port=80 --type=NodePort
sudo microk8s kubectl apply -f ingress.yaml, where ingress.yaml is a file with the following content:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: public # <-- IMPORTANT
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
After the above steps you should be able to reach your Deployment from outside your host with:
curl http://IP-ADDRESS
Side notes!
A setup like that will expose your workload on a NodePort (a port allocated on each node from the 30000-32767 range).
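To see which port from that range was allocated, you can query the Service (a quick check; the service name nginx matches the expose command above):
sudo microk8s kubectl get service nginx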
From a security perspective, I would consider using your VPS provider's firewall to limit the traffic coming to your instance, allowing only the subnets that you are connecting from.
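If your provider does not offer such a firewall, a host-level rule is one alternative; a minimal sketch assuming ufw is available, with a placeholder subnet (replace 203.0.113.0/24 with the network you connect from):
# allow HTTP only from your own subnet, then enable the firewall
sudo ufw allow from 203.0.113.0/24 to any port 80 proto tcp
sudo ufw enable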
From the perspective of GitLab integration with Kubernetes, I reckon you could find useful information on its page:
About.gitlab.com
Additional resources about Kubernetes:
Kubernetes.io: Docs: Home

Yes, use microk8s. For access from outside, use the ingress addon.

Related

How to set DNS entries & network configuration for a kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own homeserver (in proxmox ct's, was kinda difficult to get working because I am using zfs too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel pre-installed as the CNI, as well as traefik as an Ingress Controller.
I've set up rancher on my cluster as well as longhorn; the volumes are just zfs volumes mounted inside the agents though, and as they aren't on different hdd's I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through vpn tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static ip at home and a domain, and I've pointed that domain at my static ip.
Something like this (I don't know how dns entries are actually written, this is just off the top of my head for your reference; the entries are working well):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my rancher is throwing a 503 after visiting global settings, but I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know, k3s comes with a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node they're pointing to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external ip, and you would point your dns at that. Read below for more info.
Seems like you are not a noob! You have a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that; k3s will create an etcd cluster for you that runs inside pods on your cluster.
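For reference, a rough sketch of bootstrapping the first k3s server with its embedded etcd (the token value is a placeholder; check the k3s docs for the exact flags of your version):
# start the first server and initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=my-shared-secret sh -s - server --cluster-init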
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the ips of your master nodes at port 6443 (TCP). keepalived will give you a virtual ip, and you would configure your kubeconfig (the one you get from k3s) to talk to that ip. On your router you will want to reserve an ip for this (make sure not to assign it to any computers).
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
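To make the haproxy part more concrete, here is a rough sketch of the relevant haproxy.cfg fragment; the master ips are placeholders and the full file will depend on your distribution's defaults:
# forward raw TCP on 6443 to all three masters
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-api-masters

backend k8s-api-masters
    mode tcp
    balance roundrobin
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check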
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external ip; I prefer to do this with MetalLB.
MetalLB gives you a service of type LoadBalancer with an external ip.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
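For illustration, a sketch of what that could look like with the install script (the page above has the authoritative flag; this assumes a recent k3s where the bundled service load balancer is disabled so MetalLB can take over):
# install k3s without the built-in servicelb
curl -sfL https://get.k3s.io | sh -s - server --disable servicelb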
Configure metallb:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more ips on your router and put them under the addresses section in the yaml below. In this example you will see that you have 11 ips in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install with these yaml files:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress will need a service of type LoadBalancer; use its external ip as the external ip.
kubectl get service -A - look for your ingress service and see if it has an external ip and does not say pending
I will do my best to answer any of your follow up questions. Good Luck!

access postgres in kubernetes from an application outside the cluster

I am trying to access a postgres db deployed in kubernetes (kubeadm) on centos vms from an application running on another centos vm. I have deployed the postgres service as 'NodePort' type. My understanding is that we can deploy it as LoadBalancer type only on cloud providers like AWS/Azure, and not on a baremetal vm. So now I am trying to configure 'ingress' with the NodePort type service. But I am still unable to access my db other than using kubectl exec $Pod-Name on the kubernetes master.
My ingress.yaml is
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: postgres-ingress
spec:
  backend:
    serviceName: postgres
    servicePort: 5432
which does not show any address, as below:
NAME HOSTS ADDRESS PORTS AGE
postgres-ingress * 80 4m19s
I am not even able to access it from pgadmin on my local mac. Am I missing something?
Any help is highly appreciated.
Ingress won't work, it's only designed for HTTP traffic, and the Postgres protocol is not HTTP. You want solutions that deal with just raw TCP traffic:
A NodePort service alone should be enough. It's probably the simplest solution. Find out the port by doing kubectl describe on the service, and then connect your Postgres client to the IP of the node VM (not the pod or service) on that port. A minimal manifest sketch follows below, after the list of options.
You can use port-forwarding: kubectl port-forward pod/your-postgres-pod 5432:5432, and then connect your Postgres client to localhost:5432. This is my preferred way for accessing the database from your local machine (it's very handy and secure) but I wouldn't use it for production workloads (kubectl must be always running so it's somewhat fragile and you don't get the best performance).
If you do special networking configuration, it is possible to directly access the service or pod IPs from outside the cluster. You have to route traffic for the pod and service CIDR ranges to the k8s nodes; this will probably involve configuring your VM hypervisors, routers and firewalls, and is highly dependent on which networking (CNI) plugin you are using for your Kubernetes cluster. A routing sketch also follows below.
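Regarding the first option, a minimal sketch of what such a NodePort Service could look like (the name, selector and nodePort value are assumptions; adjust them to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres          # assumed pod label
  ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30432        # must fall in 30000-32767; omit to let kubernetes pick one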
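Regarding the last option, purely as an illustration (the CIDR below is a common flannel default, not necessarily yours, and the node ip is a placeholder), routing the pod network towards a node from an external Linux machine could look like:
# send traffic for the assumed pod CIDR via one of the k8s nodes
sudo ip route add 10.244.0.0/16 via 192.168.1.50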

Kubernetes NodePort / Load Balancer / Ingress on a Multi-Master Setup: Is it necessary?

I'm fairly new to this but I'm setting up a multi-master, high availability Kubernetes cluster of at least 3 masters and a variable number of nodes. I'm trying to do this WITHOUT the use of kube-spray or any other tools, in order to learn the true ins-and-outs. I feel I have most of it down except one bit:
My understanding is:
A NodePort allocates a port to a specific service
A Load Balancer is an external resource that allows for external access
An Ingress Controller allows you to configure specific paths to services and ports.
Some points about my cluster:
The pods I deploy can run on any machine in the cluster and don't need to be externally accessible.
My masters are also worker nodes and can run pods
etcd runs on each master
My question is, do I need a NodePort/LB/Ingress Controller? I'm trying to understand why I would need any of the above. If a master is joined to an existing cluster alongside another master, the pods are distributed between them, right? Isn't that all I need? Please help me to understand as I feel I'm missing a key concept.
First of all, NodePort, LoadBalancer and Ingress have nothing to do with setting up the kubernetes cluster. These three are tools to expose your apps to the outside world so that you can access them from outside the kubernetes cluster.
There are two parts here:
Setting up a highly available kubernetes cluster with three masters. I have written a blog post on how to set up a multi-master kubernetes cluster; it will give you a brief idea of how to set up a multi-master cluster in kubernetes.
https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm
Now, once you have your kubernetes cluster ready, you can start deploying your applications on it (pods, services, etc.). The applications you deploy might need to be exposed to the outside world, for example a website hosted on your kubernetes cluster that needs to be accessed from the internet. That is where NodePort, LoadBalancer or Ingress come into the picture. The difference between NodePort, LoadBalancer and Ingress, and when to use what, is explained very well in this article:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Hope this gives you some clarity.
EDIT: This edit is for the kubeadm config file for 1.13 (see comments):
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "VIRTUAL IP"
controlPlaneEndpoint: "VIRTUAL IP"
etcd:
  external:
    endpoints:
    - https://ETCD_0_IP:2379
    - https://ETCD_1_IP:2379
    - https://ETCD_2_IP:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

Minikube networking

I have a Linux build machine on which I have installed minikube. Within the minikube instance I have installed artifactory, which I will be using for storing various build artifacts.
I now want to be able to do some work on my dev machine (which is an unrelated laptop on the same network as the Linux build machine) and push some built artifacts into artifactory.
However I can't figure out how to get to artifactory. When I ssh to the Linux server and check the minikube service I can see that the artifactory instance is running on a 192.168 address.
Is there any way to expose artifactory, i.e. access it from the windows machine? Or is this not possible, and should I just install artifactory on the Linux machine rather than in minikube?
Expose your artifactory Service:
$ minikube service <artifactory-service> -n <namespace>
Or get the URL
$ minikube service <artifactory-service> -n <namespace> --url
If you want to access it remotely, you need to do something else.
Suppose that when you run minikube service <artifactory-service> -n <namespace> --url, you get the following:
http://192.168.99.100:30654
You can access artifactory in minikube using this URL, but you can't access it remotely.
Now do this to expose port 30654:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:30654:0.0.0.0:30654
You will then be able to access it from another network.
Yes, we need an ingress controller (like nginx) to expose a kubernetes service for external access.
There are three ways to create the nginx ingress service using kubernetes per https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types and expose it for external access:
LoadBalancer service type, which sets the ExternalIP automatically. This is used when there is an external non-k8s, cloud-provider's load balancer like GCE, AWS or Azure, and this external load balancer would provide the ExternalIP for the nginx ingress service.
ExternalIPs per https://kubernetes.io/docs/concepts/services-networking/service/#external-ips.
NodePort. In this approach, the service can be accessed from outside the cluster using NodeIP:NodePort/url/of/the/service.
Along with the nginx ingress controller, you'll need an ingress resource too. Refer https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example for examples.
Keep in mind that Minikube is a small VM with a small docker registry by default, so it may not be possible to store a lot of build artifacts in Minikube.
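If disk space does become the limit, the minikube VM can be created with more resources; a sketch with arbitrary example values (these flags take effect when the VM is created, e.g. after a minikube delete):
minikube start --cpus 4 --memory 8192 --disk-size 50g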
To get this to work in the end, I set up ingress on minikube and then, through entries in the hosts file and nginx as a reverse proxy, managed to get things working.

gcloud container engine - Network Load Balancing cost

I have a kubernetes setup running in google container engine. One of the k8s Services is "type: LoadBalancer"... so I guess it created a Google Network Load Balancer. Now part of my billing,
"Compute Engine Network Load Balancing", is way higher than my compute engine cost. Is there a way to eliminate the "Network Load Balancing" cost item with any other solution in kubernetes... please advise.
This question is close to what I'm looking for:
GCP Kube-Lego forwarding rule pricing
...but no answers so far.
1) Deploy nginx-ingress-controller to kube-cluster:
helm install --name my-lb stable/nginx-ingress --set controller.service.type=NodePort
helm list
kubectl get svc
This will create "my-lb-nginx-ingress-controller" - a custom nginx load balancer instead of the gke load balancer (google's). It will implement ingress rule objects in the kube-cluster.
*** After this, any ingress rule object created with "annotations: kubernetes.io/ingress.class: nginx" will be handled by this nginx controller.
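For illustration, a minimal ingress rule carrying that annotation might look like the following (host, service name and port are placeholders, and the apiVersion may differ depending on your cluster version):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80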
2) Create firewall rule to open nodePorts:
Since the nginx controller is deployed with "controller.service.type=NodePort", check the nodePorts with the "kubectl get svc" command and create a gcloud "networking/firewall" rule to allow ports "tcp:31181;tcp:31462". Now you can use a browser to reach the nginx controller at "http://node-ip-address:31181" or "https://node-ip-address:31462".
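A sketch of such a firewall rule from the gcloud CLI (the rule name and source range are placeholders; the ports are the ones from the example above):
gcloud compute firewall-rules create allow-nginx-nodeports \
    --allow tcp:31181,tcp:31462 \
    --source-ranges 0.0.0.0/0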
3) Delete stuff:
helm delete my-lb
helm del --purge my-lb
I did the above in gke, and now I have an nginx load balancer instead of google's cloud load balancer. But one limitation I experience is that "http://node-ip:80" gets connection refused... I don't know why this is. But access through the nodeport "http://node-ip-address:31181" is working. OK for now; I have to figure out the port 80 access denial.