Change kubeadm init ip address - kubernetes

How do I change the IP address when I run kubeadm init? I created a master node on Google Compute Engine and want to join nodes from AWS and Azure, but kubeadm uses the internal IP address, which is only visible from within the Google Cloud Platform network. I tried to use --apiserver-advertise-address=<external ip>, but in that case kubeadm gets stuck at [init] This might take a minute or longer if the control plane images have to be pulled. The firewalls are open.

If I understand correctly, what you are trying to do is use a GCP instance running kubeadm as the master and two nodes located on two other clouds.
What you need for this to work is a working load balancer with an external IP pointing to your instance and forwarding the TCP packets back and forth.
First I created a static external IP address for my instance:
gcloud compute addresses create myexternalip --region us-east1
Then I created a target pool for the LB and added the instance:
gcloud compute target-pools create kubernetes --region us-east1
gcloud compute target-pools add-instances kubernetes --instances kubeadm --instances-zone us-east1-b
Add a forwarding rule serving on behalf of an external IP and port range that points to your target pool. You'll have to do this for the ports the nodes need to contact your kubeadm instance on. Use the external IP created before.
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region us-east1 --ports 22 --target-pool kubernetes
You can now check your forwarding rule, which will look something like this:
gcloud compute forwarding-rules describe kubernetes-forward
IPAddress: 35.196.X.X
IPProtocol: TCP
creationTimestamp: '2018-02-23T03:25:49.810-08:00'
description: ''
id: 'XXXXX'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: kubernetes-forward
portRange: 80-80
region: https://www.googleapis.com/compute/v1/projects/XXXX/regions/us-east1
selfLink: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/forwardingRules/kubernetes-forward
target: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/targetPools/kubernetes
Now you can go through the usual process to install kubeadm and set up your cluster on your instance. kubeadm init took around 50 seconds on mine.
Afterwards, if you got the ports correctly opened in your firewall and forwarded to your master, the nodes from AWS and Azure should be able to join.
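As a rough sketch only (the IP below is the external address from the example output above; the token and hash placeholders come from kubeadm init's own output, not from this answer), the init and join steps could look like:
# On the GCP master: include the external IP in the API server certificate
sudo kubeadm init --apiserver-cert-extra-sans=35.196.X.X
# On the AWS/Azure nodes: join via the external IP
sudo kubeadm join 35.196.X.X:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>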
Congratulations, now you have a multicloud kubernetes cluster! :)

Related

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel pre-installed as the CNI, as well as Traefik as an Ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and since they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I have both a static IP at home and a domain, and I've pointed that domain to my static IP.
Something like this (I don't know how DNS entries are actually written, this is just from the top of my head for your reference, but the entries are working well):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently made a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, even though I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node they point to could, theoretically, just stop working, and then none of the services would be reachable from outside anymore.
Assuming your apps are running on ports 80 and 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer, and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside pods on your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived gives you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for this (make sure not to assign it to any computers).
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
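For illustration, a minimal haproxy.cfg sketch, assuming the three masters serve the API on 6443 (the IPs are placeholders, not from your setup); keepalived then simply floats the virtual IP between lb01 and lb02:
frontend k3s-apiserver
    bind *:6443
    mode tcp
    default_backend k3s-masters
backend k3s-masters
    mode tcp
    option tcp-check
    balance roundrobin
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check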
Load Balancing your applications running in the cluster
Use an K8s Service read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node (it disables k3s's built-in servicelb so MetalLB can handle LoadBalancer services):
https://metallb.universe.tf/configuration/k3s/
configure metallb
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install MetalLB itself with these YAML files (apply them before the ConfigMap above, since the metallb-system namespace has to exist first):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the IP your DNS points to.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP rather than pending.
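Purely as an illustration, a minimal Service of type LoadBalancer that MetalLB would assign one of the reserved addresses to (the name, selector and ports are hypothetical examples, not taken from your cluster):
apiVersion: v1
kind: Service
metadata:
  name: ingress-lb          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-ingress         # hypothetical selector; match your ingress controller's pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443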
I will do my best to answer any of your follow up questions. Good Luck!

How to assign VPC subnet(GCP) to kubernetes cluster?

I have a VPC set up on Google Cloud with 192.0.0.0/24 as its subnet, in which I am trying to set up a k8s cluster for the following servers:
VM : VM NAME : Internal IP
VM 1 : k8s-master : 192.24.1.4
VM 2 : my-machine-1 : 192.24.1.1
VM 3 : my-machine-2 : 192.24.1.3
VM 4 : my-machine-3 : 192.24.1.2
Here k8s-master acts as the master and the other 3 machines act as nodes. I am using the following command to initialize my cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 ( need to change this to vpc subnet) --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=k8s-master-external-ip
I am using flannel, and I use the following command to set up the network for my cluster:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now whenever I deploy a new pod, k8s assigns it an IP from 10.244.0.0/16, which is not reachable from my Eureka server, since that server runs in the Google Cloud VPC CIDR.
I want to configure k8s so that it uses the VPC subnet IP (the internal IP of the machine where the pod is deployed).
I even tried to manually download kube-flannel.yml and change the CIDR to my subnet, but that did not solve my problem.
Need help to resolve this. Thanks in advance.
Kubernetes needs 3 subnets:
1 subnet for the nodes (this would be your VPC subnet),
1 subnet for your pods, and
optionally 1 subnet for Services.
These subnets cannot be the same; they have to be different.
I believe what's missing in your case are routes to make the pods talk to each other. Have a look at this guide for setting up the routes you need
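As a hedged illustration of keeping the three ranges distinct (the service CIDR shown is simply kubeadm's default, made explicit; it is not from the question):
# Node subnet: your VPC subnet (the nodes' internal IPs)
# Pod and Service CIDRs must not overlap with it or with each other
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-cert-extra-sans=<k8s-master-external-ip>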

Redis Cluster Client doesn't work with Redis cluster on GKE

My setup has a K8S Redis cluster with 8 nodes and 32 pods across them and a load balancer service on top.
I am using a Redis cluster client to access this cluster using the load balancer's external IP. However, when handling queries, as part of Redis cluster redirection (MOVED / ASK), the cluster client receives internal IP addresses of the 32 Pods, connection to which fails within the client.
For example, I provide the IP address of the load balancer (35.245.51.198:6379) but the Redis cluster client throws errors like -
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host 10.32.7.2:6379, which is an internal Pod IP.
Any ideas about how to deal with this situation will be much appreciated.
Thanks in advance.
If you're running on GKE, you can NAT the Pod IP using the IP masquerade agent:
Using IP masquerading in your clusters can increase their security by preventing individual Pod IP addresses from being exposed to traffic outside link-local range (169.254.0.0/16) and additional arbitrary IP ranges
Your issue specifically is that the pod range is in 10.0.0.0/8, which is by default a non-masquerade CIDR.
You can change this using a ConfigMap so that this range is masqueraded, which makes outgoing traffic use the node's external IP as the source address.
Alternatively, you can change the pod range in your cluster to anything that is masqueraded by default.
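A minimal sketch of such a ConfigMap, assuming the cluster runs GKE's ip-masq-agent, which reads its settings from a ConfigMap named ip-masq-agent in kube-system (by leaving 10.0.0.0/8 out of nonMasqueradeCIDRs, pod traffic in that range is SNATed to the node IP):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # 10.0.0.0/8 is deliberately not listed, so the pod range gets masqueraded
    nonMasqueradeCIDRs:
    - 172.16.0.0/12
    - 192.168.0.0/16
    masqLinkLocal: false
    resyncInterval: 60s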
I have been struggling with the same problem while installing bitnami/redis-cluster on GKE.
In order to have the right networking settings you should create the cluster as a public cluster.
The equivalent command line for creating the cluster in MYPROJECT is:
gcloud beta container --project "MYPROJECT" clusters create "redis-cluster" --zone "us-central1-c" --no-enable-basic-auth --cluster-version "1.21.5-gke.1802" --release-channel "regular" --machine-type "e2-medium" --image-type "COS_CONTAINERD" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --no-enable-ip-alias --network "projects/MYPROJECT/global/networks/default" --subnetwork "projects/oddsjam/regions/us-central1/subnetworks/default" --no-enable-intra-node-visibility --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --workload-pool "myproject.svc.id.goog" --enable-shielded-nodes --node-locations "us-central1-c"
Then you need to create as many external IP addresses as you have Redis nodes, in the VPC network product. Those IP addresses will be picked up by the Redis nodes automatically.
Then you are ready to take the values.yaml of the Bitnami Redis Cluster Helm chart and change the configuration according to your use case. Add the list of external IPs you created to the cluster.externalAccess.loadBalancerIP value.
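As a rough sketch only, based on the value path mentioned above (the IPs are placeholders, and the exact structure should be double-checked against the chart's own values.yaml):
cluster:
  externalAccess:
    enabled: true
    loadBalancerIP:
    - 35.0.0.1
    - 35.0.0.2
    - 35.0.0.3
    - 35.0.0.4
    - 35.0.0.5
    - 35.0.0.6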
Finally, you can run the command to install a Redis cluster on GKE by running
helm install cluster-name -f values.yaml bitnami/redis-cluster
This command will give you the password of the cluster. You can then use redis-cli to connect to the new cluster with:
redis-cli -c -h EXTERNAL_IP -p 6379 -a PASSWORD

adding master to Kubernetes cluster: cluster doesn't have a stable controlPlaneEndpoint address

How can I add a second master to the control plane of an existing Kubernetes 1.14 cluster?
The available documentation apparently assumes that both masters (in stacked control plane and etcd nodes) are created at the same time. I have created my first master already a while ago with kubeadm init --pod-network-cidr=10.244.0.0/16, so I don't have a kubeadm-config.yaml as referred to by this documentation.
I have tried the following instead:
kubeadm join ... --token ... --discovery-token-ca-cert-hash ... \
--experimental-control-plane --certificate-key ...
The part kubeadm join ... --token ... --discovery-token-ca-cert-hash ... is what is suggested when running kubeadm token create --print-join-command on the first master; it normally serves for adding another worker. --experimental-control-plane is for adding another master instead. The key in --certificate-key ... is as suggested by running kubeadm init phase upload-certs --experimental-upload-certs on the first master.
I receive the following errors:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver.
The recommended driver is "systemd". Please follow the guide at
https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
unable to add a new control plane instance a cluster that doesn't have a stable
controlPlaneEndpoint address
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
What does it mean for my cluster not to have a stable controlPlaneEndpoint address? Could this be related to controlPlaneEndpoint in the output from kubectl -n kube-system get configmap kubeadm-config -o yaml currently being an empty string? How can I overcome this situation?
As per HA - Create load balancer for kube-apiserver:
In a cloud environment you should place your control plane nodes behind a TCP forwarding load balancer. This load balancer distributes
traffic to all healthy control plane nodes in its target list. The
health check for an apiserver is a TCP check on the port the
kube-apiserver listens on (default value :6443).
The load balancer must be able to communicate with all control plane nodes on the apiserver port. It must also allow incoming traffic
on its listening port.
Make sure the address of the load balancer
always matches the address of kubeadm’s ControlPlaneEndpoint.
To set ControlPlaneEndpoint config, you should use kubeadm with the --config flag. Take a look here for a config file example:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
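For a fresh first master this file would be passed to kubeadm as below (a sketch; the file name is just an example). On an already-initialized cluster like yours, the same setting lives in the kubeadm-config ConfigMap mentioned in your preflight output, which you would edit to add controlPlaneEndpoint under the ClusterConfiguration:
# Fresh init with a config file (example file name)
sudo kubeadm init --config kubeadm-config.yaml
# Existing cluster: edit the stored ClusterConfiguration instead
kubectl -n kube-system edit configmap kubeadm-config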
Kubeadm config file examples are scattered across many documentation sections. I recommend that you read the /apis/kubeadm/v1beta1 GoDoc, which has fully populated examples of the YAML files used by multiple kubeadm configuration types.
If you are configuring a self-hosted control-plane, consider using the kubeadm alpha selfhosting feature:
[..] key components such as the API server, controller manager, and
scheduler run as DaemonSet pods configured via the Kubernetes API
instead of static pods configured in the kubelet via static files.
This PR (#59371) may clarify the differences of using a self-hosted config.
You need to copy the certificates (etcd / API server / CA etc.) from the existing master and place them on the second master.
Then run the kubeadm init script. Since the certs are already present, the cert creation step is skipped and the rest of the cluster initialization steps resume.
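A hedged sketch of which files are usually meant here (the paths are kubeadm's defaults, and new-master is a placeholder hostname):
# Run from the existing master
scp /etc/kubernetes/pki/ca.{crt,key} \
    /etc/kubernetes/pki/sa.{key,pub} \
    /etc/kubernetes/pki/front-proxy-ca.{crt,key} \
    new-master:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.{crt,key} new-master:/etc/kubernetes/pki/etcd/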

Accessing GCP Internal Load Balancer from another region

I need to access an internal application running on GKE Nginx Ingress service riding on Internal Load Balancer, from another GCP region.
I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).
Internal Load Balancer can be accessed perfectly well via VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions under the same network is a good idea.
Workarounds are welcomed!
In the release notes from GCP, it is stated that:
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
UPDATE: Below service works for GKE v1.16.x & newer versions:
apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
For GKE v1.15.x and older versions:
Accessing internal load balancer IP from a VM sitting in a different region will not work. But this helped me to make the internal load balancer global.
As we know, an internal load balancer is nothing but a forwarding rule, so we can use the gcloud command to enable global access.
Firstly get the internal IP address of the Load Balancer using kubectl and save its IP like below:
# COMMAND:
kubectl get services/ilb-global
# OUTPUT:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ilb-global LoadBalancer 10.0.12.12 10.123.4.5 80:32400/TCP 18m
Note the value of "EXTERNAL-IP" or simply run the below command to make it even simpler:
# COMMAND:
kubectl get service/ilb-global \
-o jsonpath='{.status.loadBalancer.ingress[].ip}'
# OUTPUT:
10.123.4.5
GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:
# COMMAND:
gcloud compute forwarding-rules list | grep 10.123.4.5
# OUTPUT
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
a26cmodifiedb3f8252484ed9d0192 asia-south1 10.123.4.5 TCP asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.
Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as it is still a beta feature):
# COMMAND:
gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
--region asia-south1 --allow-global-access
# OUTPUT:
Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (but the same VPC network).
Another possible way is to run an nginx reverse proxy server on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that Compute Engine instance to communicate with the services of the GKE cluster.
First of all, note that the only way to connect to a GCP resource (in this case your GKE cluster) from an on-premises location is through either a Cloud Interconnect or a VPN setup, and these must be in the same region and VPC to be able to communicate with each other.
Having said that, I see you would prefer not to do that under the same VPC, so a workaround for your scenario could be:
Creating a Service of type LoadBalancer, so your cluster can be reached through an external (public) IP by exposing this service. If you are worried about security, you can use Istio to enforce access policies, for example.
Or, creating an HTTP(S) load balancer with Ingress, so your cluster can be reached through its external (public) IP. Again, for security purposes you can use GCP Cloud Armor, which so far works only for HTTP(S) Load Balancing.
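For the second option, a minimal Ingress sketch (the name, host and backend Service are placeholders; on GKE an Ingress like this provisions an external HTTP(S) load balancer by default):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress        # placeholder name
spec:
  rules:
  - host: app.example.com    # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello      # placeholder Service exposed by the cluster
            port:
              number: 80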