How to pass wildcard character for kubeadm configuration - kubernetes

We are setting up an HA k8s environment on AWS. We have created an AMI with Docker and k8s installed.
An HA cluster with 3 master and 5 worker nodes is created behind a TLS-enabled network load balancer. The certificate attached to the TLS listener uses the domain *.amazonaws.com.
In my ClusterConfiguration file, controlPlaneEndpoint and certSANs point to the DNS name of the load balancer.
kubeadm installation fails; when checking the Docker logs for k8s_kube-scheduler, I see that the wildcard certificate is not accepted.
Config file:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
clusterName: test
controlPlaneEndpoint: tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
    address: 0.0.0.0
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs:
    address: 0.0.0.0
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
E0318 15:36:20.604025 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: Get https://tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: x509: certificate is valid for *.amazonaws.com, not tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com
Could you help me with how to pass a wildcard character in my kubeadm configuration?

Wildcard certificates only work for one sub-level.
Say you have a *.example.com certificate: it is accepted for foo.example.com and foo2.example.com, but not for foo.boo.example.com; you would need a *.boo.example.com certificate for that.
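One way to see exactly which names the presented certificate covers is to inspect its Subject Alternative Name list. A minimal sketch, assuming openssl is installed and using the load-balancer hostname from the question:
echo | openssl s_client -connect tf-k8s-t1-nlb-34390285259d3aac.elb.us-west-1.amazonaws.com:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
If the output only lists *.amazonaws.com, the certificate cannot match the multi-level ELB hostname, which is exactly the x509 error shown above.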

Related

How is Kubernetes Service IP assigned and stored?

I deployed a service myservice to the k8s cluster. Using kubectl describe service ..., I can see that the service IP is 172.20.127.114. I am trying to figure out how this service IP is assigned. Is it assigned by a K8s controller and stored in DNS? How does the K8s control plane decide on the IP range?
kubectl describe service myservice
Name:              myservice
Namespace:         default
Labels:            app=myservice
                   app.kubernetes.io/instance=myservice
Annotations:       argocd.argoproj.io/sync-wave: 3
Selector:          app=myservice
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                172.20.127.114
IPs:               172.20.127.114
Port:              <unset>  80/TCP
TargetPort:        5000/TCP
Endpoints:         10.34.188.30:5000,10.34.89.157:5000
Session Affinity:  None
Events:            <none>
The Kubernetes API server accepts the service CIDR range via the service-cluster-ip-range parameter, and Service IPs are assigned from this CIDR block. The control-plane pod names vary per environment, so update the pod name accordingly when inspecting them.
Pod IP addresses come from the CNI. The API server, etcd, kube-proxy, scheduler and controller-manager addresses come from the server/node IP address. The Service IP address range is defined in the API server configuration.
If we check the API server configuration, we can see the - --service-cluster-ip-range=10.96.0.0/12 entry in the command section; this is a CIDR-notation IP range from which to assign service cluster IPs:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
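If you only want to read the current value rather than edit it, a quick check against the static pod manifest is enough (a sketch, assuming a kubeadm-provisioned control-plane node):
sudo grep -- '--service-cluster-ip-range' /etc/kubernetes/manifests/kube-apiserver.yaml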
See all default configurations:
kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Change Default CIDR IP Range
You can configure the Kube API server's service CIDR in two ways:
1) When bootstrapping the cluster, via kubeadm init --service-cidr <IP Range>
2) By changing kube-apiserver directly (the kubelet periodically scans the static pod manifests for changes):
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
Note that with option 2 you will get the error "The connection to the server IP:6443 was refused - did you specify the right host or port?" for a while, so you have to wait a couple of minutes for kube-apiserver to start again.
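One way to wait for the API server to come back instead of retrying by hand (a sketch, assuming kubectl is configured on the node):
until kubectl get --raw /healthz >/dev/null 2>&1; do sleep 5; done; echo "kube-apiserver is serving again"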
The new CIDR block only applies to newly created Services, which means old Services remain in the old CIDR block. To test:
kubectl create service clusterip test-cidr-block --tcp 80:80
Then check the newly created Service:
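For example, you can print just the assigned cluster IP of the test Service and confirm it falls inside the new range:
kubectl get service test-cidr-block -o jsonpath='{.spec.clusterIP}{"\n"}'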

Access kubernetes cluster outside of VPN

I configured a Kubernetes cluster with rke on-premises (for now a single node: control plane, worker and etcd).
The VM which I launched the cluster on is inside a VPN.
After successfully initializing the cluster, I managed to access the cluster with kubectl from inside the VPN.
I then tried to access the cluster from outside the VPN, so I updated the kubeconfig file and changed server: https://<the VM IP> to server: https://<the external IP>.
I also exposed port 6443.
When trying to access the cluster I get the following error:
E0912 16:23:39 proxy_server.go:147] Error while proxying request: x509: certificate is valid for <the VM IP>, 127.0.0.1, 10.43.0.1, not <the external IP>
My question is: how can I add the external IP to the certificate so that I will be able to access the cluster with kubectl from outside the VPN?
The rke configuration yml:
# config.yml
nodes:
- address: <VM IP>
  hostname_override: control-plane-telemesser
  role: [controlplane, etcd, worker]
  user: motti
  ssh_key_path: /home/<USR>/.ssh/id_rsa
ssh_key_path: /home/<USR>/.ssh/id_rsa
cluster_name: my-cluster
ignore_docker_version: false
kubernetes_version:
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: 30000-32767
    pod_security_policy: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    extra_args:
      max-pods: 110
network:
  plugin: flannel
  options:
    flannel_backend_type: vxlan
dns:
  provider: coredns
authentication:
  strategy: x509
authorization:
  mode: rbac
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
monitoring:
  provider: metrics-server
Thanks,
So I found the solution for the RKE cluster configuration.
You need to add sans to the cluster.yml file in the authentication section:
authentication:
  strategy: x509
  sans:
  - "10.18.160.10"
After you save the file, just run rke up again and it will update the cluster.
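To confirm the new SAN actually landed in the API server's serving certificate, you can inspect it from outside the VPN. A sketch, assuming openssl is available and port 6443 is reachable (replace <the external IP> with your address):
echo | openssl s_client -connect <the external IP>:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'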

Provide node name to kubeadm init using config file

I need to provide a specific node name to my master node in Kubernetes. I am using kubeadm to set up my cluster, and I know there is an option --node-name master which you can provide to kubeadm init, and it works fine.
The issue is that I am using a config file to initialize the cluster, and I have tried various ways to provide the node name, but it is not picking up the name. My kubeadm init config file is:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeRegistration:
  name: master
Now I run kubeadm init --config=config.yaml and it times out with the following error:
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-x-x-x-x.ec2.internal as master by adding a label and a taint
timed out waiting for the condition
PS: This issue also occurs when you don't provide --hostname-override to the kubelet along with --node-name to kubeadm init; I am providing both. Also, I do not face any issues when I skip the config.yaml file and provide the --node-name option to kubeadm init on the command line.
I want to know how to provide the --node-name option in the config.yaml file. Any pointers are appreciated.
I was able to resolve this issue using the following config file; just updating in case anyone encounters the same issue:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeName: master
This is how you can specify --node-name in config.yaml.
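For reference, later kubeadm API versions dropped MasterConfiguration and moved this field under nodeRegistration in InitConfiguration. A minimal sketch of the equivalent setting, assuming the v1beta3 schema shown earlier on this page:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: master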

Fails to run kubeadm init

With reference to https://github.com/kubernetes/kubeadm/issues/1239. How do I configure and start the latest kubeadm successfully?
kubeadm_new.config is generated by config migration:
kubeadm config migrate --old-config kubeadm_default.config --new-config kubeadm_new.config. Content of kubeadm_new.config:
apiEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: khteh-t580
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
I changed "kubernetesVersion: v1.12.2" in kubeadm_new.config and it seems to progress further and now stuck at the following error:
failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
How do I set fail-swap-on to FALSE to get it going?
Kubeadm comes with a command that prints the default configuration, so you can check each of the assigned default values with:
kubeadm config print-default
In your case, if you want to disable the swap check in the kubelet, you have to add the following lines to your current kubeadm config:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
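The KubeletConfiguration document goes into the same file as the other kubeadm documents (separated by ---), and you then re-run init against it. A sketch, reusing the file name from the question:
kubeadm init --config kubeadm_new.config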
You haven't mentioned why you chose to disable the swap check.
I wouldn't consider it a first option - not because memory swap is bad practice (it is a useful and basic kernel mechanism) but because the kubelet is not designed to work properly with swap enabled.
K8s is very clear about this topic, as you can see in the kubeadm installation requirements:
Swap disabled. You MUST disable swap in order for the kubelet to work properly.
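If you instead disable swap on the host, as the docs require, a common sketch (assuming the swap entries live in /etc/fstab) is:
sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab # comment out swap entries so the change survives reboots; review /etc/fstab afterwards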
I would recommend reading about evicting end-user Pods and the relevant features that K8s provides to prioritize pod memory:
1) The three QoS classes - make sure that your high-priority workloads run with the Guaranteed (or at least Burstable) class.
2) Pod Priority and Preemption.

How to automate Let's Encrypt certificate renewal in Kubernetes with cert-manager on a bare-metal cluster?

I would like to access my Kubernetes bare-metal cluster with an exposed Nginx Ingress Controller for TLS termination. To be able to automate certificate renewal, I would like to use the Kubernetes addon cert-manager, which is kube-lego's successor.
What I have done so far:
Set up a Kubernetes (v1.9.3) cluster on bare-metal (1 master, 1 minion, both running Ubuntu 16.04.4 LTS) with kubeadm and flannel as pod network following this guide.
Installed nginx-ingress (chart version 0.9.5) with the Kubernetes package manager Helm:
helm install --name nginx-ingress --namespace kube-system stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true,controller.service.type=ClusterIP
Installed cert-manager (chart version 0.2.2) with Helm:
helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=true
The Ingress Controller is exposed successfully and works as expected when I test with an Ingress resource. For proper Let's Encrypt certificate management and automatic renewal with cert-manager, I first of all need an Issuer resource. I created it from this acme-staging-issuer.yaml:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: email@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
kubectl create -f acme-staging-issuer.yaml runs successfully but kubectl describe issuer/letsencrypt-staging gives me:
Status:
  Acme:
    Uri:
  Conditions:
    Last Transition Time:  2018-03-05T21:29:41Z
    Message:               Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
    Reason:                ErrRegisterACMEAccount
    Status:                False
    Type:                  Ready
Events:
  Type     Reason                Age               From                     Message
  ----     ------                ----              ----                     -------
  Warning  ErrVerifyACMEAccount  1s (x11 over 7s)  cert-manager-controller  Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
  Warning  ErrInitIssuer         1s (x11 over 7s)  cert-manager-controller  Error initializing issuer: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Without a ready Issuer, I cannot proceed to generate cert-manager Certificates or utilise the ingress-shim (for automatic renewal).
What am I missing in my setup? Is it sufficient to expose the ingress controller using hostNetwork=true, or is there a better way to expose its ports 80 and 443 on a bare-metal cluster? How can I resolve the tls: oversized record received error when creating a cert-manager Issuer resource?
The tls: oversized record received error was caused by a misconfigured /etc/resolv.conf on the Kubernetes minion. It could be resolved by editing it like this:
$ sudo vi /etc/resolvconf/resolv.conf.d/base
Add nameserver list:
nameserver 8.8.8.8
nameserver 8.8.4.4
Update resolvconf:
$ sudo resolvconf -u
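To confirm the fix, you can check that the ACME staging endpoint resolves and is reachable from the node before re-creating the Issuer (a sketch using standard tools and the URL from the question):
nslookup acme-staging.api.letsencrypt.org
curl -sv https://acme-staging.api.letsencrypt.org/directory -o /dev/null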