Unable to Change Kubectl Context to my Google Kubernetes Cluster

I've created a Google Kubernetes Engine Cluster through the Cloud Console. Now I want to connect to it using kubectl from my local machine.
A few weeks ago I used
gcloud container clusters get-credentials cents-ideas --zone europe-west3-a --project cents-ideas
as provided by the Cloud Console.
The output after running this command is:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
But the cluster neither shows up in kubectl config get-contexts nor is it set as the kubectl config current-context. I am confused because this command used to work before and I did nothing differently.
My kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
and gcloud version
Google Cloud SDK 278.0.0
alpha 2020.01.24
beta 2020.01.24
bq 2.0.52
core 2020.01.24
gsutil 4.47
kubectl 2020.01.24
cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <LONG HASH>
    server: https://35.234.108.15
  name: gke_cents-ideas_europe-west3-a_cents-ideas
contexts:
- context:
    cluster: gke_cents-ideas_europe-west3-a_cents-ideas
    user: gke_cents-ideas_europe-west3-a_cents-ideas
  name: gke_cents-ideas_europe-west3-a_cents-ideas
current-context: gke_cents-ideas_europe-west3-a_cents-ideas
kind: Config
preferences: {}
users:
- name: gke_cents-ideas_europe-west3-a_cents-ideas
  user:
    auth-provider:
      config:
        access-token: <SOME TOKEN>
        cmd-args: config config-helper --format=json
        cmd-path: /snap/google-cloud-sdk/115/bin/gcloud
        expiry: "2020-02-02T09:45:19Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Output of kubectl get nodes:
NAME         STATUS   ROLES    AGE   VERSION
flolubuntu   Ready    <none>   42d   v1.17.2

I had microk8s installed, and my kubectl was evidently talking to the local microk8s cluster (note the flolubuntu node above) instead of GKE. Removing microk8s and installing kubectl via snap install kubectl fixed my issue.
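If you hit the same thing, a quick way to check which kubectl binary and which kubeconfig are actually in use (a minimal sketch; the paths will differ on your machine):
# Which kubectl binaries are on the PATH (e.g. a microk8s wrapper vs. the snap/gcloud one)?
which -a kubectl
# Which kubeconfig is kubectl reading, and what does it currently point at?
echo "${KUBECONFIG:-$HOME/.kube/config}"
kubectl config current-context
kubectl config get-contexts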

kubectl commands fail with Unable to connect to the server: x509: certificate signed by unknown authority

I have a GKE cluster set up, version 1.24.
I copied the login command from the GKE UI:
gcloud container clusters get-credentials gke-123 --zone us-east1-b --project <project>
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gke-123.
Running any kubectl command (including kubectl version) yields:
kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"darwin/arm64"}
Unable to connect to the server: x509: certificate signed by unknown authority
gcloud version
Google Cloud SDK 415.0.0
bq 2.0.84
core 2023.01.20
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.4.0
gsutil 5.18
kubeconfig entry:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://...
  name: ...
contexts:
- context:
    cluster: ...
    user: ...
  name: ...
current-context: ...
kind: Config
preferences: {}
users:
- name: ...
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
The certificate-authority-data seems to be valid.
I'd appreciate help in resolving this.
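One way to sanity-check that locally (a sketch only; <endpoint> stands in for the redacted server address above, and use base64 -D on older macOS):
# Decode the CA bundle stored in the kubeconfig and print its subject and validity
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d | openssl x509 -noout -subject -dates
# Show the issuer of the certificate the API server actually presents
openssl s_client -connect <endpoint>:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates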

Including extra flags in the apiserver manifest file in kubernetes v1.21.0 does not seem to have any effect

I am trying to add the two flags below to apiserver in the /etc/kubernetes/manifests/kube-apiserver.yaml file:
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    - --admission-control-config-file=/vagrant/admission-control.yaml
[...]
I am not mounting a volume or mount point for the /vagrant/admission-control.yaml file. It is fully accessible from the master node, since the directory is shared with the VM created by Vagrant:
vagrant@master-1:~$ cat /vagrant/admission-control.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodNodeSelector
  path: /vagrant/podnodeselector.yaml
vagrant@master-1:~$
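(The referenced /vagrant/podnodeselector.yaml is not shown here; per the PodNodeSelector plugin's documented config format it would look roughly like this, with the selectors as placeholders:)
podNodeSelectorPluginConfig:
  clusterDefaultNodeSelector: <label-selector>
  namespace1: <label-selector>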
Kubernetes version:
vagrant@master-1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Link to the /etc/kubernetes/manifests/kube-apiserver.yaml file being used by the running cluster Here
vagrant@master-1:~$ kubectl delete pods kube-apiserver-master-1 -n kube-system
pod "kube-apiserver-master-1" deleted
Unfortunately "kubectl describe pods kube-apiserver-master-1 -n kube-system" only informs that the pod has been recreated. Flags do not appear as desired. No errors reported.
Any suggestion would be helpful. Thank you.
NOTES:
I also tried to patch the apiserver's ConfigMap. The patch is applied, but it does not take effect in the newly created pod.
I also tried to pass the two flags in a file via kubeadm init --config, but there is little documentation on how to put these two flags (and all the other apiserver flags I need) into a configuration file in order to reinstall the master node.
UPDATE:
I hope this is useful for everyone facing the same issue.
After two days of searching the internet and lots of testing, I only managed to make it work with the procedure below:
sudo tee ${KUBEADM_INIT_CONFIG_FILE} <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "${INTERNAL_IP}"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: ${KUBERNETES_VERSION}
controlPlaneEndpoint: "${LOADBALANCER_ADDRESS}:6443"
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    advertise-address: ${INTERNAL_IP}
    enable-admission-plugins: NodeRestriction,PodNodeSelector
    admission-control-config-file: ${ADMISSION_CONTROL_CONFIG_FILE}
  extraVolumes:
  - name: admission-file
    hostPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    mountPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    readOnly: true
  - name: podnodeselector-file
    hostPath: ${PODNODESELECTOR_CONFIG_FILE}
    mountPath: ${PODNODESELECTOR_CONFIG_FILE}
    readOnly: true
EOF
sudo kubeadm init phase control-plane apiserver --config=${KUBEADM_INIT_CONFIG_FILE}
You need to create a hostPath volume mount like the one below:
volumeMounts:
- mountPath: /vagrant
  name: admission
  readOnly: true
...
volumes:
- hostPath:
    path: /vagrant
    type: DirectoryOrCreate
  name: admission
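After saving the static pod manifest, the kubelet recreates the API server pod on its own; a quick sanity check that the flags landed (sketch, using the pod name from the question):
kubectl -n kube-system get pod kube-apiserver-master-1 -o yaml \
  | grep -E 'enable-admission-plugins|admission-control-config-file'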

kubeadm doesn't accept controlplane certificatekey in config file

Used version:
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
To join a new master node to the control plane using a registry other than the public one, I need to run kubeadm with a --config <file> parameter. I loaded the k8s container images into my registry and tried to use kubeadm accordingly. Unfortunately, in this case kubeadm doesn't accept the certificateKey from the config file.
kubeadm join my-k8s-api.de:443 --config kubeadm-join-config.yaml
The config file looks like this:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "my-k8s-api.de:443"
    caCertHashes:
    - "sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a"
    token: "4bh3s7.adon04r87zyh7gwj"
nodeRegistration:
  kubeletExtraArgs:
    # pause container image
    pod-infra-container-image: my-registry.de:5000/pause:3.1
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"
Actually, I get back:
W1204 12:47:12.944020 54671 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"JoinConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "certificateKey"
When I use kubeadm join with --control-plane --certificate-key XXXXXXXXX on the command line, I can successfully join the master node to the control plane, but that requires the node to have internet access.
Any guess?
Did I make a typo?
You are getting this error because you are using apiVersion: kubeadm.k8s.io/v1beta1, which doesn't have this field. I found that out while going through the v1beta1 docs; you can have a look yourself for more details: v1beta1
So what you need to do is switch your apiVersion to:
apiVersion: kubeadm.k8s.io/v1beta2
which has the required field. For details, please check v1beta2
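For reference, a sketch of the config from the question with only the apiVersion switched; controlPlane.certificateKey is accepted in v1beta2 (all other values carried over unchanged):
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "my-k8s-api.de:443"
    caCertHashes:
    - "sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a"
    token: "4bh3s7.adon04r87zyh7gwj"
nodeRegistration:
  kubeletExtraArgs:
    # pause container image
    pod-infra-container-image: my-registry.de:5000/pause:3.1
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"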

EKS Kubernetes user with RBAC seen as system:anonymous

I've been following this post to create user access to my Kubernetes cluster (running on Amazon EKS). I created a key and CSR, approved the request, and downloaded the certificate for the user. Then I created a kubeconfig file with the key and crt. When I run kubectl with this kubeconfig, I'm recognized as system:anonymous.
$ kubectl --kubeconfig test-user-2.kube.yaml get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
I expected the user to be recognized but get denied access.
$ kubectl --kubeconfig test-user-2.kube.yaml version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl --kubeconfig test-user-2.kube.yaml config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: REDACTED
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test-user-2
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: test-user-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# running with my other account (which uses heptio-authenticator-aws)
$ kubectl describe certificatesigningrequest.certificates.k8s.io/user-request-test-user-2
Name:                user-request-test-user-2
Labels:              <none>
Annotations:         <none>
CreationTimestamp:   Wed, 01 Aug 2018 15:20:15 +0200
Requesting User:
Status:              Approved,Issued
Subject:
  Common Name:       test-user-2
  Serial Number:
Events:              <none>
I did create a ClusterRoleBinding with the admin role (I also tried cluster-admin) for this user, but that should not matter for this step. I'm not sure how to further debug 1) whether the user was created or not, or 2) whether I missed some configuration.
Any help is appreciated!
As mentioned in this article:
When you create an Amazon EKS cluster, the IAM entity user or role (for example, for federated users) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.
1. Check if you have the aws-auth ConfigMap applied to your cluster:
kubectl describe configmap -n kube-system aws-auth
If the ConfigMap is present, skip step 2 and proceed to step 3.
2. If the ConfigMap is not applied yet, do the following:
Download the stock ConfigMap:
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml
Adjust it by putting your NodeInstanceRole ARN in the rolearn: field. To get the NodeInstanceRole value, check out this manual; you will find it at steps 3.8 - 3.10.
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
Apply this config map to the cluster:
kubectl apply -f aws-auth-cm.yaml
Wait for the cluster nodes to become Ready:
kubectl get nodes --watch
3. Edit the aws-auth ConfigMap and add users to it according to the example below:
kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
Save and exit the editor.
4. Create a kubeconfig for your IAM user following this manual.
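For reference, the user entry in such a kubeconfig delegates to the authenticator via an exec plugin rather than a client certificate; a minimal sketch (the cluster name is a placeholder, and the apiVersion may differ for newer client versions):
users:
- name: test-user-2
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - <cluster-name>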
I got this back from AWS support today.
Thanks for your patience. I have just heard back from the EKS team. They have confirmed that the aws-iam-authenticator has to be used with EKS and, because of that, it is not possible to authenticate using certificates.
I haven't heard whether this is expected to be supported in the future, but it is definitely broken at the moment.
This seems to be a limitation of EKS. Even though the CSR is approved, the user cannot authenticate. I used the same procedure on another Kubernetes cluster and it worked fine.

Kubernetes RunAsUser is forbidden

When I try to create a pod with a non-root fsGroup (here 2000):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I hit this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root@ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?
If the issue is indeed caused by RBAC permissions, you can try creating a ClusterRoleBinding with a cluster role as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, visit here.
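A rough sketch of such a binding (the binding and service account names below are placeholders, not taken from that post):
# Bind the cluster-admin ClusterRole to the service account used when creating the pod
kubectl create clusterrolebinding demo-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:demo-user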
You need to disable the SecurityContextDeny admission plugin when setting up the kube-apiserver.
On the master node:
ps -ef | grep kube-apiserver
and check the enabled admission plugins:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny
cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword and delete it, then save and quit with :wq.
systemctl restart kube-apiserver
That fixed it.
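A quick way to confirm the plugin is gone after the restart:
# The running process should no longer list SecurityContextDeny
ps -ef | grep kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'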