Secured communication over flannel in Kubernetes

Starting to experiment with Kubernetes (v1.3.2), I've created an on-premises cluster of 3 CentOS 7 VMs. As I understand it, the internal communication in the cluster uses the flannel overlay network by default.
Is it possible to secure all the internal communication in the cluster by setting flannel to use TLS?

Related

Which port of Consul is used for service discovery for a Percona cluster?

Using an etcd container in a swarm cluster for service discovery is not working properly on the swarm overlay network, but Consul seems to work like a charm in swarm mode. Is a single Consul agent running as a server enough to introduce to the Percona nodes via an environment variable such as DISCOVERY_SERVICE=etcd:2379 (which we used for etcd)? What port should I use for Consul instead of 2379, and how should Consul be configured?

Should Windows K8s nodes have aws-node & kube-proxy pods?

I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.
My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace, or do they work differently than Linux ones?
kube-proxy pods are part of the default installation of Kubernetes. They are created automatically and are needed on both Linux and Windows nodes.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
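You can confirm this on a running cluster by inspecting the kube-proxy DaemonSet and the nodes its pods are scheduled on. A minimal sketch; the k8s-app=kube-proxy label is the usual one but may differ in your installation:

```
# Show the kube-proxy DaemonSet and which nodes its pods landed on.
kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide
```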
The aws-node pod is part of the AWS VPC CNI plugin for Kubernetes.
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently supported only on Linux. Windows nodes use a different CNI plugin, vpc-shared-eni.
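A quick way to see this difference yourself (a sketch, assuming the standard kubernetes.io/os node label) is to compare each node's OS label with where the aws-node pods actually run:

```
# List nodes with their OS label, then see which nodes run aws-node pods.
kubectl get nodes -L kubernetes.io/os
kubectl get pods -n kube-system -o wide | grep aws-node
```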

Joining K3s agent node to K8s Master node

I was curious if we can use k8s and k3s together as it will help me in solving a complex architecture for edge computing.
I have a running k8s cluster prepared using kubeadm. For edge devices, I would like to have lightweight k3s running on them. Is it possible to use k8s cluster control plane to control k3s agent running on edge devices (embedded/routers etc)?
This kind of setup will open a lot of options for me as I'll have k8s functionality with k3s light footprint.
I tried using the default token of the kubeadm k8s cluster on the k3s agent node (worker), but obviously it didn't join.
kubeadm join internally performs TLS bootstrapping of the kubelet on worker nodes. You will have to TLS-bootstrap the kubelet on the k3s worker nodes to join them to the cluster created by kubeadm.
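As a rough illustration, the generic kubelet TLS bootstrap flow looks something like the sketch below. This assumes you can run a standalone kubelet on the edge device; <control-plane-ip>, <bootstrap-token>, and the file names are placeholders, and the kubelet embedded in the k3s binary may not expose the same knobs.

```
# 1. On the kubeadm control plane: create a bootstrap token.
kubeadm token create --print-join-command

# 2. On the edge node: write a bootstrap kubeconfig the kubelet can use to
#    request its client certificate from the API server.
kubectl config set-cluster bootstrap \
  --kubeconfig=bootstrap-kubeconfig \
  --server=https://<control-plane-ip>:6443 \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials kubelet-bootstrap \
  --kubeconfig=bootstrap-kubeconfig --token=<bootstrap-token>
kubectl config set-context bootstrap --kubeconfig=bootstrap-kubeconfig \
  --cluster=bootstrap --user=kubelet-bootstrap
kubectl config use-context bootstrap --kubeconfig=bootstrap-kubeconfig

# 3. Start the kubelet with --bootstrap-kubeconfig=bootstrap-kubeconfig and
#    --kubeconfig pointing at the path where the signed credentials should land.
```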

Is it possible to have kube-proxy without the Kubernetes environment on a VM, using Istio mesh expansion?

I have been working on a very innovative project which involves both Kubernetes and Istio. I have a 2-node Kubernetes cluster set up with Istio installed and its sidecars in the pods. I have already hosted the bookinfo application on the nodes, but with a separate VM involved, following the procedures given in Istio Mesh Expansion.
So I have a VM where the details and mysqldb pods are present. The other pods are running in the k8s cluster, so right now they communicate within a private network.
The next phase of my project requires me to set up kube-proxy separately, without installing Kubernetes on the VM, so that it can communicate directly with the kube-apiserver running on the master nodes of the k8s cluster over the private network. Can anybody suggest a way to go about this?
All components of Kubernetes need to be able to reach the kube-apiserver; otherwise they will not work.
The next phase of my project requires me to set up kube-proxy separately, without installing Kubernetes on the VM, so that it can communicate directly with the kube-apiserver running on the master nodes of the k8s cluster over the private network.
To access the kube-apiserver through a Service with a private ClusterIP address, you already need a working kube-proxy. So it is impossible to use any private ClusterIP address until you have set up kube-proxy, and kube-proxy itself has to reach the kube-apiserver at an address outside the ClusterIP range.
The kube-apiserver can be exposed using a NodePort or LoadBalancer type of Service.
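As a rough sketch of what that could look like (everything here is illustrative: the kubeconfig path and the address are placeholders, and the credentials in the kubeconfig must be authorized by the cluster), you would copy the kube-proxy binary to the VM and start it against an API server address that is reachable from the VM, for example the NodePort or LoadBalancer address mentioned above:

```
# Illustrative only: run kube-proxy on the VM against an externally reachable
# kube-apiserver address. kube-proxy.kubeconfig must contain
#   server: https://<apiserver-address-reachable-from-vm>:<port>
# plus credentials the cluster accepts for kube-proxy.
kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --proxy-mode=iptables
```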

How to install a Kubernetes cluster on a Rancher cluster?

I want to use Rancher Server across several servers:
Master
Node1
Node2
Node3
Maybe I only need the Rancher agent on the node servers.
I also want to build a Kubernetes cluster on these servers, so I would install the Kubernetes master on the Rancher master and install the Kubernetes nodes (kubelet) on the Rancher nodes. Is that right?
So, can the Kubernetes nodes not be installed through the Rancher server, and do I have to install them myself?
You will need a Rancher Agent on any server you want Rancher to place containers on. Rancher can deploy Kubernetes for you. I believe what you want to do is add all of the nodes, including the Rancher master, to a single Cattle environment (the Default environment is Cattle). When adding the Rancher Server host, make sure you set CATTLE_AGENT_IP to that server's IP address. Once the hosts are registered, you will want to set host labels on the nodes: for nodes 1, 2 and 3, set the label compute=true; on the Rancher Server host, set two host labels, etcd=true and orchestration=true.
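For example, registering a host with labels might look roughly like this (a sketch for Rancher 1.x; the agent version, registration URL, and token come from your own Rancher UI and are placeholders here):

```
# On node1/node2/node3: register the host and label it as a compute node.
sudo docker run --rm --privileged \
  -e CATTLE_HOST_LABELS='compute=true' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> https://<rancher-server>/v1/scripts/<registration-token>

# On the Rancher Server host: advertise its own IP and label it for etcd
# and orchestration.
sudo docker run --rm --privileged \
  -e CATTLE_AGENT_IP=<server-ip> \
  -e CATTLE_HOST_LABELS='etcd=true&orchestration=true' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:<version> https://<rancher-server>/v1/scripts/<registration-token>
```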
Once the labels are set up, click on Catalog and search for Kubernetes. You can probably stick with most of the defaults, but change Plane Isolation to required.
Rancher should deploy the Kubernetes management servers on the same host as your Rancher Server, and the remaining nodes will be Kubernetes minions.