kubeadm doesn't accept controlplane certificatekey in config file - kubernetes

used version:
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
To join a new master node to the control plane using a registry other than the public one for downloading images, I need to run kubeadm with a "--config <file>" command-line parameter. I loaded the k8s container images into my registry and tried to use kubeadm accordingly. Unfortunately, in this case kubeadm doesn't accept the "certificateKey" from the config file.
kubeadm join my-k8s-api.de:443 --config kubeadm-join-config.yaml
The config file looks like this:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "my-k8s-api.de:443"
    caCertHashes:
      - "sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a"
    token: "4bh3s7.adon04r87zyh7gwj"
nodeRegistration:
  kubeletExtraArgs:
    # pause container image
    pod-infra-container-image: my-registry.de:5000/pause:3.1
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"
What I actually get back is:
W1204 12:47:12.944020 54671 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"JoinConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "certificateKey"
When I use "kubeadm join ... --control-plane --certificate-key XXXXXXXXX" on the command line instead, I can successfully join the master node to the control plane, but that requires the node to have internet access.
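Spelled out with the values from the config file above, the working command-line variant looks roughly like this:
kubeadm join my-k8s-api.de:443 \
  --token 4bh3s7.adon04r87zyh7gwj \
  --discovery-token-ca-cert-hash sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a \
  --control-plane \
  --certificate-key eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383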
Any guess?
Did I make a typo somewhere?

You are getting this error because you are using apiVersion: kubeadm.k8s.io/v1beta1, which doesn't have this field. I found that out while going through the v1beta1 docs. You can have a look yourself for more details: v1beta1
So what you need to do is switch your apiVersion to:
apiVersion: kubeadm.k8s.io/v1beta2
which has the required field. For details please check v1beta2
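For reference, a minimal sketch of the join config from the question moved to v1beta2 (same values as above; the only structural assumption is the v1beta2 schema, which places certificateKey under controlPlane):
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "my-k8s-api.de:443"
    caCertHashes:
      - "sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a"
    token: "4bh3s7.adon04r87zyh7gwj"
nodeRegistration:
  kubeletExtraArgs:
    # pause container image pulled from the private registry
    pod-infra-container-image: my-registry.de:5000/pause:3.1
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"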

Related

Kubernetes Configmap Error validating data: unknown

This is my Kubernetes config to create the ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: fileprocessing-acracbsscan-configmap
data:
  SCHEDULE_RUNNING_TIME: '20'
The kubectl version:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
But I get the error: error validating data: unknown; if you choose to ignore these errors, turn validation off with --validate=false
I don't understand what "unknown" refers to here or how to resolve it.
First of all, you did not publish the Deployment, so to be clear: the Deployment.yaml should reference this ConfigMap.
envFrom:
- configMapRef:
    name: fileprocessing-acracbsscan-configmap
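A minimal sketch of where that reference sits inside the Deployment's Pod template (the container name and image here are placeholders, not from the question; the other Deployment fields are omitted):
spec:
  template:
    spec:
      containers:
      - name: fileprocessing                       # placeholder name
        image: my-registry/fileprocessing:latest   # placeholder image
        envFrom:
        - configMapRef:
            name: fileprocessing-acracbsscan-configmap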
Also, I'm not sure it matters, but try putting the data in quotes:
"SCHEDULE_RUNNING_TIME": "20"
Use kubectl describe po <pod_name> -n <namespace> to get a clearer view of why the pod is failing.
Also try kubectl get events -n <namespace> to list all events in the current namespace; that may clarify the reason as well.

Unable to Change Kubectl Context to my Google Kubernetes Cluster

I've created a Google Kubernetes Engine Cluster through the Cloud Console. Now I want to connect to it using kubectl from my local machine.
A few weeks ago I used
gcloud container clusters get-credentials cents-ideas --zone europe-west3-a --project cents-ideas
as provided by the Cloud Console.
The output after running this command is:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cents-ideas.
But the cluster shows up neither in kubectl config get-contexts nor as the kubectl config current-context. I am confused because this command used to work, and I did nothing differently.
My kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
and gcloud version
Google Cloud SDK 278.0.0
alpha 2020.01.24
beta 2020.01.24
bq 2.0.52
core 2020.01.24
gsutil 4.47
kubectl 2020.01.24
cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <LONG HASH>
    server: https://35.234.108.15
  name: gke_cents-ideas_europe-west3-a_cents-ideas
contexts:
- context:
    cluster: gke_cents-ideas_europe-west3-a_cents-ideas
    user: gke_cents-ideas_europe-west3-a_cents-ideas
  name: gke_cents-ideas_europe-west3-a_cents-ideas
current-context: gke_cents-ideas_europe-west3-a_cents-ideas
kind: Config
preferences: {}
users:
- name: gke_cents-ideas_europe-west3-a_cents-ideas
  user:
    auth-provider:
      config:
        access-token: <SOME TOKEN>
        cmd-args: config config-helper --format=json
        cmd-path: /snap/google-cloud-sdk/115/bin/gcloud
        expiry: "2020-02-02T09:45:19Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Output of kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
flolubuntu   Ready    <none>   42d   v1.17.2
I had microk8s installed. Removing it and installing kubectl via snap (snap install kubectl) instead fixed my issue.
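Roughly, the steps look like this (assuming both microk8s and kubectl are managed through snap, and reusing the get-credentials flags from the question):
sudo snap remove microk8s
sudo snap install kubectl --classic
gcloud container clusters get-credentials cents-ideas --zone europe-west3-a --project cents-ideas
kubectl config current-context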

Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: ReadMapCB: expect { or n, but found ", error found in #10 byte of

I'm trying to set up a private Docker image registry for my Kubernetes cluster. I'm following this link.
$ cat ~/.docker/config.json | base64
ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMDAiOiB7CgkJCSJhdXRoIjogImJYbDFjMlZ5
T21oMGNHRnpjM2RrIgoJCX0KCX0KfQ==
I have a file image-registry-secrets.yaml with the contents below:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson:ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMDAiOiB7CgkJCSJhdXRoIjogImJYbDFjMlZ5T21oMGNHRnpjM2RrIgoJCX0KCX0KfQ==
type: kubernetes.io/dockerconfigjson
And when I run the below command
$kubectl create -f image-registry-secrets.yaml --validate=false && kubectl get secrets
Error from server (BadRequest): error when creating "image-registry-secrets.yml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|","data":".dockercon|..., bigger context ...|{"apiVersion":"v1","data":".dockerconfigjson:ewoJImF1dGhzIjogewoJCSJsb2NhbGhv|...
What is the issue with kubectl create -f image-registry-secrets.yaml --validate=false and how can I resolve this error?
The Kubernetes version is:
$kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
You need to include a space after .dockerconfigjson: and before the base64 string; after that it should work.
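For reference, the corrected manifest from the question, with only that space added:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMDAiOiB7CgkJCSJhdXRoIjogImJYbDFjMlZ5T21oMGNHRnpjM2RrIgoJCX0KCX0KfQ==
type: kubernetes.io/dockerconfigjson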
When you paste the base64 password, it gets split across a few lines with spaces added in between. It is difficult to explain, and there is no need to add a space after .dockerconfigjson, as the YAML provided in the tutorial is correct; the problem occurs after pasting the base64-encoded JSON.
Open the secret in Vim and run:
:set listchars+=space:␣ and then :set list
This will show all spaces as ␣; check that there are none between the lines of your password. This worked in my case.
Update:
The Vim command does not always show the spaces, so just navigate to the beginning of each line of your secret key and press Backspace so the lines join up.
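As an aside, one way to sidestep the manual base64 handling entirely is to let kubectl build the secret itself; the registry address, username, and password below are placeholders:
kubectl create secret docker-registry registrypullsecret \
  --docker-server=localhost:50000 \
  --docker-username=<username> \
  --docker-password=<password>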

EKS Kubernetes user with RBAC seen as system:anonymous

I've been following this post to create user access to my Kubernetes cluster (running on Amazon EKS). I created a key and a CSR, approved the request, and downloaded the certificate for the user. Then I created a kubeconfig file with the key and the crt. When I run kubectl with this kubeconfig, I'm recognized as system:anonymous.
$ kubectl --kubeconfig test-user-2.kube.yaml get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"
I expected the user to be recognized, but instead I am denied access.
$ kubectl --kubeconfig test-user-2.kube.yaml version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl --kubeconfig test-user-2.kube.yaml config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: REDACTED
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test-user-2
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: test-user-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
# running with my other account (which uses heptio-authenticator-aws)
$ kubectl describe certificatesigningrequest.certificates.k8s.io/user-request-test-user-2
Name: user-request-test-user-2
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 01 Aug 2018 15:20:15 +0200
Requesting User:
Status: Approved,Issued
Subject:
Common Name: test-user-2
Serial Number:
Events: <none>
I did create a ClusterRoleBinding with the admin role (also tried cluster-admin) for this user, but that should not matter for this step. I'm not sure how to debug this further: 1) whether the user was created or not, or 2) whether I missed some configuration.
Any help is appreciated!
As mentioned in this article:
When you create an Amazon EKS cluster, the IAM entity user or role (for example, for federated users) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.
Check if you have aws-auth ConfigMap applied to your cluster:
kubectl describe configmap -n kube-system aws-auth
If ConfigMap is present, skip this step and proceed to step 3.
If ConfigMap is not applied yet, you should do the following:
Download the stock ConfigMap:
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml
Adjust it by putting your NodeInstanceRole ARN into the rolearn: field. To get the NodeInstanceRole value, check out this manual; you will find it at steps 3.8 - 3.10.
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
Apply this config map to the cluster:
kubectl apply -f aws-auth-cm.yaml
Wait for the cluster nodes to become Ready:
kubectl get nodes --watch
Edit aws-auth ConfigMap and add users to it according to the example below:
kubectl edit -n kube-system configmap/aws-auth
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
Save and exit the editor.
Create kubeconfig for your IAM user following this manual.
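A rough sketch of the user entry that manual produces when aws-iam-authenticator is used (the user and cluster names here are placeholders):
users:
- name: eks-admin                              # placeholder user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"                     # placeholder cluster name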
I got this back from AWS support today.
Thanks for your patience. I have just heard back from the EKS team. They have confirmed that the aws-iam-authenticator has to be used with EKS and, because of that, it is not possible to authenticate using certificates.
I haven't heard whether this is expected to be supported in the future, but it is definitely broken at the moment.
This seems to be a limitation of EKS. Even though the CSR is approved, the user cannot authenticate. I used the same procedure on another Kubernetes cluster and it worked fine.

Kubernetes ud615 newbie secure-monolith.yaml `error validating data`?

I'm a Kubernetes newbie trying to follow along with the Udacity tutorial class linked on the Kubernetes website.
I execute
kubectl create -f pods/secure-monolith.yaml
That is referencing this official yaml file: https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml
I get this error:
error: error validating "pods/secure-monolith.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
FYI, the official lesson link is here: https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923
My first guess is that the provided yaml is out of date and incompatible with the current Kubernetes. Is this right? How can I fix/update?
I ran into the exact same problem but with a much simpler example.
Here's my yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
The command kubectl create -f pod-nginx.yaml returns:
error: error validating "pod-nginx.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
As the error says, I am able to override it but I am still at a loss as to the cause of the original issue.
Local versions:
Ubuntu 16.04
minikube version: v0.22.2
kubectl version: 1.8
Thanks in advance!
After correcting the kubectl version (so that it matches the server version), the issue is fixed; see:
$ kubectl create -f config.yml
configmap "test-cfg" created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", ...
Server Version: version.Info{Major:"1", Minor:"7", ...
This was the case before the fix:
$ kubectl create -f config.yml
error: error validating "config.yml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"ConfigMap"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8",...
Server Version: version.Info{Major:"1", Minor:"7",...
In general, use the same version for kubectl and the Kubernetes server.
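As a sketch of how to align them: check both versions, then install a client binary that matches the server (the exact version number below is illustrative; use the one reported for your server):
# shows client and server versions; they should match, or at least be within one minor version
kubectl version
# download a matching client binary, e.g. v1.7.11 (illustrative)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.11/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl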