Access kubernetes cluster outside of VPN

I configured a Kubernetes cluster with RKE on premises (for now a single node acting as control plane, worker and etcd).
The VM which I launched the cluster on is inside a VPN.
After successfully initializing the cluster, I managed to access it with kubectl from inside the VPN.
I then tried to access the cluster from outside the VPN, so I updated the kubeconfig file and changed the following:
server: https://<the VM IP> to be server: https://<the external IP>.
I also exposed port 6443.
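For clarity, the only kubeconfig edit is the server line in the cluster entry, roughly like this (placeholders kept as in the question):
- cluster:
    certificate-authority-data: <unchanged>
    server: https://<the external IP>:6443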
When trying to access the cluster I get the following error:
E0912 16:23:39 proxy_server.go:147] Error while proxying request: x509: certificate is valid for <the VM IP>, 127.0.0.1, 10.43.0.1, not <the external IP>
My question is: how can I add the external IP to the certificate so that I will be able to access the cluster with kubectl from outside the VPN?
The RKE configuration YAML:
# config.yml
nodes:
- address: <VM IP>
  hostname_override: control-plane-telemesser
  role: [controlplane, etcd, worker]
  user: motti
  ssh_key_path: /home/<USR>/.ssh/id_rsa
ssh_key_path: /home/<USR>/.ssh/id_rsa
cluster_name: my-cluster
ignore_docker_version: false
kubernetes_version:
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: 30000-32767
    pod_security_policy: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    extra_args:
      max-pods: 110
network:
  plugin: flannel
  options:
    flannel_backend_type: vxlan
dns:
  provider: coredns
authentication:
  strategy: x509
authorization:
  mode: rbac
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
monitoring:
  provider: metrics-server
Thanks,

So I found the solution for the RKE cluster configuration.
You need to add sans to the cluster.yml file in the authentication section:
authentication:
  strategy: x509
  sans:
    - "10.18.160.10"
After you save the file, just run rke up again and it will update the cluster.
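To double-check that the new SAN actually landed in the serving certificate, something like this works from outside (a sketch; the placeholder IP is the one from the question):
rke up --config cluster.yml
openssl s_client -connect <the external IP>:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"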

Related

connect to kubernetes cluster from local machine using kubectl

I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of .kube/config file from master node to my local machine's .kube/config.
I have only changed the IP address of the server because the original file references an internal IP. The file looks like this now:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://35.166.48.257:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error:
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257
This is because the kube-apiserver TLS certificate is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, such as telling kubectl to skip TLS verification, but I would not recommend that. The best option would be to regenerate the whole PKI on your cluster.
Both ways are described here.
Next time, for a kubeadm cluster, you can use --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server TLS certificate.
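If you only need the external IP added to the existing API server certificate rather than a full PKI regeneration, a commonly used kubeadm sketch looks like this (it assumes the default /etc/kubernetes/pki layout on the master; back up the old files first, and the IP is the external one from the question):
# on the master node
sudo mkdir -p /root/pki-backup
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=35.166.48.257
# then restart the kube-apiserver static pod so it picks up the new certificate,
# e.g. by moving its manifest out of /etc/kubernetes/manifests and back,
# or by restarting its container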

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: latest (1.0.2 release)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must to have minimal 12 length key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi gets started I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I have tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not applied to nifi.properties inside the pod. Is there any reason for this?
Appreciate the help!
As a NodePort service, you can also assign a port number from the 30000-32767 range.
You can apply the values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let NiFi whitelist your https://localhost:
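A concrete way to pass such overrides at install time is helm --set (a sketch; the release name and namespace come from the output above, and the keys assume the chart exposes them exactly as in the values.yaml shown earlier):
helm upgrade --install nifi cetic/nifi -n jeed-cluster \
  --set properties.webProxyHost=10.0.39.39:30666 \
  --set properties.httpsPort=8443 \
  --set service.type=NodePort \
  --set service.nodePort=30666
Afterwards you can grep nifi.properties inside the pod to confirm the value was actually applied (the exact path inside the container may differ by image).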

Accessing microk8s API for cluster behind router

I have a microk8s cluster composed of several Raspberry Pi 4, behind a Linksys router.
My computer and the cluster's router are both connected to my ISP router, at 192.168.0.10 and 192.168.0.2 respectively.
The cluster's subnet is composed of the following :
router : 192.168.1.10
microk8s master : 192.168.1.100 (fixed IP)
microk8s workers : 192.168.1.10X (via DHCP).
I can ssh from my computer to the master via a port forwarding 192.168.0.2:22 > 192.168.1.100:22
I can nmap the cluster via a port forwarding 192.168.0.2:16443 > 192.168.1.100:16443 (16443 being the API port for microk8s).
But I can't call the k8s API:
kubectl cluster-info
returns
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.152.183.1, 192.168.1.100, fc00::16d, fc00::dea6:32ff:fecc:a007, not 192.168.0.2
I've tried using the --insecure-skip-tls-verify flag, but:
error: You must be logged in to the server (Unauthorized)
My local (laptop) config is the following:
> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.2:16443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I'd say I'd like to add 192.168.0.2 to the certificate, but all the answers I can find online refer to the --insecure-skip-tls-verify flag.
Can you help, please?
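One approach that is commonly used with MicroK8s (a sketch, not confirmed in this thread; the path assumes the standard snap layout) is to add the extra IP as a SAN in the CSR template on the master and let MicroK8s regenerate its certificates:
# on the MicroK8s master (192.168.1.100)
sudo nano /var/snap/microk8s/current/certs/csr.conf.template
# under [ alt_names ], add the address kubectl actually connects to, e.g.:
#   IP.99 = 192.168.0.2
sudo microk8s stop && sudo microk8s start
# recent releases can also regenerate explicitly, e.g. with microk8s refresh-certs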

How to pass wildcard character for kubeadm configuration

We are setting up an HA k8s environment on AWS. We have created an AMI where Docker and k8s are installed.
An HA cluster with 3 master and 5 worker nodes is created behind a TLS-enabled network load balancer. The certificate added to the TLS listener uses the domain *.amazonaws.com.
In my cluster's ClusterConfiguration file, the controlPlaneEndpoint and certSANs point to the DNS name of the load balancer.
The kubeadm installation fails; when checking the Docker logs for k8s_kube-scheduler, I see the wildcard certificate is not accepted.
Config file:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
clusterName: test
controlPlaneEndpoint: tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
    address: 0.0.0.0
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs:
    address: 0.0.0.0
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
E0318 15:36:20.604025 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: x509: certificate is valid for *.amazonaws.com, not tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com
Could you help me with how to pass a wildcard character in my kubeadm configuration?
Wildcard certificates only work for one sub-level.
Say you have a *.example.com cert:
it is accepted for foo.example.com and foo2.example.com, but not for foo.boo.example.com; you'd need a *.boo.example.com cert for that.
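An ELB hostname like tf-k8s-t1-nlb-....elb.us-west-1.amazonaws.com sits several labels below amazonaws.com, so a *.amazonaws.com certificate cannot match it. To see exactly which names the endpoint actually presents, a quick check like this can help (a sketch; the hostname and port are the ones from the error above):
openssl s_client -connect tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"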

kubectl not working from other host, but works fine from localhost

What happened:
I'm testing Kubernetes 1.9.0 to upgrade the production cluster, and I cannot access it with kubectl from another host.
I'm getting the following error:
pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\"
I tried with the admin user and with another user created earlier with a read-only role.
What you expected to happen:
Works fine on kubernetes 1.5
How to reproduce it (as minimally and precisely as possible):
I installed kubernetes 1.9.0 with kubeadm.
I can access the local cluster from the master with the following command:
kubectl --kubeconfig kubeconfig get pods
with server: https://127.0.0.1:6443
I added a rule on HAProxy to redirect that port to another one, and did some tests:
The old environment has a proxy configured so that all requests to https://example.org/api/k8s are redirected to the k8s API endpoint.
I configured this new environment with the same configuration, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
I configured this new environment with a new DNS name, proxying in TCP mode from port 443 to 6443, but it is not working. (Error: pods is forbidden: User \"system:anonymous\" cannot list pods in the namespace \"default\")
The kubeconfig file sets the server field to: https://k8s.example.org
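For reference, the TCP-mode proxying mentioned above looks roughly like this in HAProxy (a sketch with a placeholder backend address; client certificates only reach the API server when the proxy does not terminate TLS itself):
frontend k8s-api
    bind *:443
    mode tcp
    option tcplog
    default_backend k8s-api-server

backend k8s-api-server
    mode tcp
    server master 127.0.0.1:6443 check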
Anything else we need to know?:
kubeconfig file (kubeconfig for admin user is similar):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://127.0.0.1:6443
    #server: https://k.example.org
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: read_only
  name: read_only-context
current-context: read_only-context
kind: Config
preferences: {}
users:
- name: read_only
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/users/read_only/read_only.crt
    client-key: /etc/kubernetes/users/read_only/read_only.key
    username: read_only
Environment:
Kubernetes version (use kubectl version): 1.9.0
Cloud provider or hardware configuration: bare metal (in fact a VM on AWS)
OS (e.g. from /etc/os-release): CentOS 7
Kernel (e.g. uname -a): 3.10.0-514.10.2.el7.x86_64
Install tools: kubeadm
Others: