Expose kube-apiserver with additional IP address

I set up a k8s cluster using kubeadm init on a bare metal server.
I noticed the kube-apiserver is exposing its interface on a private IP:
# kubectl get pods kube-apiserver-cluster1 -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-apiserver-cluster1 1/1 Running 0 6d22h 10.11.1.99 cluster1 <none> <none>
Here's the kube config inside the cluster:
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.11.1.99:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
This is fine for using kubectl locally on the cluster, but I also want to expose the kube-apiserver on the server's public IP address. Ultimately I'm trying to configure kubectl on a laptop to access the cluster remotely.
How can I expose the kube-apiserver on an external IP address?

Execute the following command:
$ kubeadm init --pod-network-cidr=<ip-range> --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]
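If the cluster is already initialized (as in your case), you don't necessarily have to re-run kubeadm init from scratch; a rough sketch, assuming a kubeadm version that supports init phases and the default /etc/kubernetes/pki layout, is to regenerate only the apiserver certificate with the extra SAN:
# Move the old apiserver certificate aside, then regenerate it with the public IP as an extra SAN
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
$ sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<PUBLIC_IP>
# Restart the kube-apiserver static pod (e.g. move its manifest out of /etc/kubernetes/manifests and back) so it picks up the new certificate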
Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl remotely.
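On the laptop, for example, this can be done with kubectl config (a sketch; the cluster name kubernetes matches the kubeconfig above and <PUBLIC_IP> is a placeholder):
# Point the existing cluster entry at the public address instead of the private one
$ kubectl config set-cluster kubernetes --server=https://<PUBLIC_IP>:6443
# Sanity check: this should now go over the public IP
$ kubectl get nodes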
You can also forward the private IP of the master node to the public IP of the master node on the worker node. Run this command on the worker node before running kubeadm join:
$ sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>
But keep in mind that you'll also have to forward the workers' private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being hidden behind cloud provider NAT).
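For example, on the master node a mirrored rule per worker would look like this (a sketch of the same DNAT pattern; substitute each worker's addresses):
$ sudo iptables -t nat -A OUTPUT -d <Private IP of worker node> -j DNAT --to-destination <Public IP of worker node>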
See more: apiserver-ip, kube-apiserver.

Related

Kubectl commands cannot be executed from another VM

I'm having an issue when executing kubectl commands. My cluster consists of one Master and one Worker node. The kubectl commands can be executed from the Master server without any issue. But I also have another VM which I use as a Jump server to log in to the master and worker nodes, and I need to execute the kubectl commands from that Jump server. I created the .kube directory and copied the kubeconfig file from the Master node to the Jump server, and I also set the context correctly. But the kubectl commands hang when executed from the Jump server and give a timeout error.
Below is the information.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout
ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes
Everything seems to be OK to me, and I'm wondering why the kubectl commands hang when executed from the Jump server.
I troubleshot the issue by verifying whether the Jump VM could telnet to the Kubernetes Master node, using the command below.
telnet <ip-address-of-the-kubernetes-master-node> 6443
Since the error was a "Connection Timed Out", I had to add a firewall rule to the Kubernetes Master node, as shown below. Note: in my case I'm using GCP.
gcloud compute firewall-rules create allow-kubernetes-apiserver \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes \
--source-ranges 0.0.0.0/0
Then I was able to telnet to the Master node without any issue. If you still can't connect to the Master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the Master node.
Then switch the context using the command below.
kubectl config use-context <context-name>
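Note that if the Master node's public IP was not included in the API server certificate's SANs, kubectl will fail TLS verification once you point the kubeconfig at the public IP. As a quick reachability test only (a sketch, not something to keep in place), certificate verification can be skipped:
# Insecure: only to confirm the public IP and port are reachable
kubectl --insecure-skip-tls-verify=true get nodes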

Kubernetes access to port on node from inside pod?

I am trying to access a service listening on a port running on every node in my bare metal (Ubuntu 20.04) cluster from inside a pod. I can use the real IP address of one of the nodes and it works. However I need pods to connect to the port on their own node, and I can't use '127.0.0.1' inside a pod.
More info: I am trying to wrangle a bunch of existing services into k8s. We use an old version of Consul for service discovery and have it running on every node providing DNS on 8600. I figured out how to edit the coredns Corefile to add a consul { } block so lookups for .consul work.
consul {
    errors
    cache 30
    forward . 157.90.123.123:8600
}
However I need to replace that IP address with the "address of the node the coredns pod is running on".
Any ideas? Or other ways to solve this problem? Tx.
Comment from @mdaniel worked. Tx.
Edit coredns deployment. Add this to the container after volumeMounts:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Edit coredns config map. Add to bottom of the Corefile:
consul {
    errors
    cache 30
    forward . {$K8S_NODE_IP}:8600
}
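To apply the two changes above, something like the following should work (a sketch; the resource names are the kubeadm defaults):
kubectl -n kube-system edit deployment coredns      # add the env block under the container
kubectl -n kube-system edit configmap coredns       # append the consul block to the Corefile
kubectl -n kube-system rollout restart deployment coredns   # restart so the pods pick up both changes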
Check that DNS is working
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
nslookup myservice.service.consul
nslookup www.google.com
exit

How can a Kubernetes pod connect to database which is running in the same local network (outside the cluster) as the host?

I have a Kubernetes cluster (K8s) running on a physical server A (internal network IP 192.168.200.10) and a PostgreSQL database running on another physical server B (internal network IP 192.168.200.20). How can my Java app container (pod) running in K8s connect to the PostgreSQL DB on server B?
OS: Ubuntu v16.04
Docker 18.09.7
Kubernetes v1.15.4
Calico v3.8.2
Pod base image: openjdk:8-jre-alpine
I have tried following this example to create a service and endpoint
kind: Service
apiVersion: v1
metadata:
  name: external-postgres
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-postgres
subsets:
  - addresses:
      - ip: 192.168.200.20
    ports:
      - port: 5432
My JDBC connection string is jdbc:postgresql://external-postgres/MY_APPDB, but it doesn't work. The pod cannot ping server B or telnet to the DB using said internal IP, nor ping the external-postgres service name. I do not wish to use "hostNetwork: true" or connect to server B via a public IP.
Any advice is much appreciated. Thanks.
I just found out the issue is due to the K8s pod network conflicting with the servers' local network (192.168.200.x) subnet.
During the K8s cluster initialization:
kubeadm init --pod-network-cidr=192.168.0.0/16
The 192.168.0.0/16 CIDR range must be changed to something else, e.g. 10.123.0.0/16.
This IP range must also be changed in the calico.yaml file before applying the Calico plugin:
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.123.0.0/16"
I can now ping and telnet to server B after resetting and re-initializing the K8s cluster with the different CIDR.
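For reference, the reset and re-init looked roughly like this (a sketch; any other kubeadm init flags you originally used still apply):
kubeadm reset
kubeadm init --pod-network-cidr=10.123.0.0/16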
I guess you can replace CALICO_IPV4POOL_CIDR without re-spawning the K8s cluster via the kubeadm builder tool; maybe it can be useful in some circumstances.
Remove the current Calico CNI plugin installation, e.g.:
$ kubectl delete -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Install the Calico CNI add-on, supplying the CALICO_IPV4POOL_CIDR parameter with the desired value:
$ curl -k https://docs.projectcalico.org/v3.8/manifests/calico.yaml --output some_file.yaml && sed -i "s~$old_ip~$new_ip~" some_file.yaml && kubectl apply -f some_file.yaml
Re-spin CoreDNS pods:
$ kubectl delete pod --selector=k8s-app=kube-dns -n kube-system
Wait until CoreDNS pods obtain IP address from a new network CIDR pool.
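To verify from inside the cluster, something like this can be used (a sketch reusing the netshoot image from an earlier answer; the service name and DB IP come from the manifests above):
$ kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
nslookup external-postgres        # should resolve to the service's ClusterIP
nc -vz 192.168.200.20 5432        # should report the PostgreSQL port as open
exit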

Helm list tries to connect to localhost rather than Kubernetes

I have a Kubernetes cluster running and all pods are running. This is a Windows machine with minikube on it.
However helm ls --debug gives following error
helm ls --debug
[debug] Created tunnel using local port: '57209'
[debug] SERVER: "127.0.0.1:57209"
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused
Cluster information
kubectl.exe cluster-info
Kubernetes master is running at https://135.250.128.98:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl service
kubectl.exe get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h
Dashboard is accessible at http://135.250.128.98:30000
kube configuration:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\abc\.minikube\ca.crt
    server: https://135.250.128.98:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    as-user-extra: {}
    client-certificate: C:\Users\abc\.minikube\client.crt
    client-key: C:\Users\abc\.minikube\client.key
Is there a solution? Most online resources say the cluster is misconfigured, but I'm not sure what is misconfigured or how to solve this error.
What worked for me when I was facing the same issue was changing automountServiceAccountToken to true.
Use the following command to edit the tiller-deploy deployment:
kubectl --namespace=kube-system edit deployment/tiller-deploy
Then change automountServiceAccountToken to true.
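Alternatively, the same change can be applied non-interactively with a patch (a sketch; assumes the default tiller-deploy deployment in kube-system):
# Set automountServiceAccountToken on the Tiller pod template
kubectl -n kube-system patch deployment tiller-deploy -p '{"spec":{"template":{"spec":{"automountServiceAccountToken":true}}}}'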
I've faced that problem and found an explanation on GitHub.
In this case, the preferable way to make it work is to rebuild the docker container with the missing environment variable. These lines should build a new image:
cat << eof > Dockerfile
FROM gcr.io/kubernetes-helm/tiller:v2.3.1
ENV KUBERNETES_MASTER XX.XX.XX.XX:8080
eof
docker build -t tiller:latest .
Please substitute XX.XX.XX.XX with your Kubernetes Master IP address.
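Then redeploy Tiller with the rebuilt image; as far as I recall, Helm v2 supported overriding the image via helm init (treat the exact flag as an assumption to verify against your Helm version):
# Upgrade the existing Tiller deployment to the locally built image
helm init --tiller-image tiller:latest --upgrade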

Access Kubernetes Cluster From a Remote Host Present in a Different Network

I have deployed a 2-node, 1-master k8s cluster on Google Cloud with the help of kubeadm.
root@ubuntu-vm-1404:~/ansible/Kubernetes# kubectl --kubeconfig kubernetes.conf get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-node1 Ready 1h v1.9.3
kubernetes-node2 Ready 1h v1.9.3
master-kubernetes Ready master 1h v1.9.3
[sujeetkp@master-kubernetes ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Can somebody please help me with how I can access the cluster from my local machine or from a remote host in a different network?
root@ubuntu-vm-1404:~/ansible/Kubernetes# kubectl --kubeconfig kubernetes.conf config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.142.0.3:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
In the config file the private IP is mentioned as "server: https://10.142.0.3:6443", so I doubt I can access it from a different network.
I have followed the below Document.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
The commands that I have executed are
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
kubeadm join --token b9cd48.c4b0d860b9b530f7 10.142.0.3:6443 --discovery-token-ca-cert-hash sha256:5c15e951dcca92f5877cd2dab8a4383accadedc37233b68d8c33451768dc03e3
You need kubectl installed on your remote host and the conf file copied over, as mentioned in that document. You can also use kubectl proxy to forward the API server to a local port on the machine where you run it. Once that's done, make sure to change the cluster server address in the conf file from the private IP to the public IP. The same document you refer to has the details.
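A minimal sketch of that, assuming SSH access to the master and the default kubeadm file locations (replace <MASTER_PUBLIC_IP> with the master's public IP):
# Copy the admin kubeconfig from the master to the remote host
scp root@<MASTER_PUBLIC_IP>:/etc/kubernetes/admin.conf ~/.kube/config
# Swap the private API server address for the public one
sed -i 's#https://10.142.0.3:6443#https://<MASTER_PUBLIC_IP>:6443#' ~/.kube/config
kubectl get nodes
Port 6443 also has to be open to your remote host in the cloud firewall, as described in an earlier answer.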