How to change Kubernetes configuration key file from controller-manager and apiserver to point to the same x509 RSA key?

I am having a Tiller panic crash as detailed in the Helm FAQ (https://docs.helm.sh/using_helm/) under the question
Q: Tiller crashes with a panic:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation...
The FAQ answer suggests
To fix this, you will need to change your Kubernetes configuration.
Make sure that --service-account-private-key-file from
controller-manager and --service-account-key-file from apiserver point
to the same x509 RSA key.
I've searched online and read the docs at https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/, which state
You must pass a service account private key file to the token
controller in the controller-manager by using the
--service-account-private-key-file option. The private key will be used to sign generated service account tokens. Similarly, you must
pass the corresponding public key to the kube-apiserver using the
--service-account-key-file option. The public key will be used to verify the tokens during authentication.
and the docs at https://kubernetes.io/docs/reference/access-authn-authz/authentication/
The docs explain the concepts well, but give no specifics on actually changing the config.
How do I change my Kubernetes configuration as the FAQ answer suggests?
Make sure that --service-account-private-key-file from
controller-manager and --service-account-key-file from apiserver point
to the same x509 RSA key.
Details:
using kops and a gossip-based k8s cluster
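From what I can tell, the end state should look something like the sketch below, assuming a cluster where the control-plane flags can be edited directly (kubeadm-style static pod manifests; kops instead renders these flags from its cluster spec, and the key path is only illustrative):
# kube-apiserver flags (e.g. in /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm-style setups)
--service-account-key-file=/srv/kubernetes/service-account.key
# kube-controller-manager flags (kube-controller-manager.yaml on the same control-plane host)
--service-account-private-key-file=/srv/kubernetes/service-account.key
# Both flags point at the same RSA key file; restart both components after editing.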

I have found through trial and error that helm ls seems to have a bug in my setup, or maybe helm ls is expected to show the error when there are no releases to show.
Right after helm init, helm ls would show the error panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation...,
but if you just went ahead and used helm,
e.g. helm install stable/<package>, the chart deploys successfully.
And after the charts are deployed, calling helm ls no longer returns the error and correctly shows the deployed charts (releases).
I have confirmed with kubectl get pods, kubectl get services and kubectl get deployments that the released objects are all running.
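For concreteness, the sequence that reproduced this looked roughly like the following (the chart name is just an example, not the one I actually installed):
helm init                           # installs Tiller into the cluster
helm ls                             # panics: nil pointer dereference (no releases exist yet)
helm install stable/nginx-ingress   # the chart still deploys fine despite the panic above
helm ls                             # now lists the release without panicking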

Related

What causes x509 cert unknown authority in some Kubernetes clusters when using the Hashicorp Vault CLI?

I'm trying to deploy an instance of HashiCorp Vault with TLS and integrated storage using the official Helm chart. I've run through the official tutorial using minikube without any issues, and I also tested the tutorial with a cluster created with kind. It went as expected on both minikube and kind; however, when I tried it on a production cluster created with TKGI (Tanzu Kubernetes Grid Integrated), I ran into x509 errors when running vault commands in the server pods. I can get past some of them by using -tls-skip-verify, but what might be different between these clusters to cause the warning? It also seems to be causing additional problems when I try to join the replicas to the raft pool.
Here's an example showing the x509 error:
bash-3.2$ kubectl exec -n vault vault-0 -- vault operator init \
> -key-shares=1 \
> -key-threshold=1 \
> -format=json > /tmp/vault/cluster-keys.json
Get "https://127.0.0.1:8200/v1/sys/seal-status": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ca")
Is there something that could be updated on the TKGI clusters so that these x509 errors could be avoided?
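One workaround that is less drastic than -tls-skip-verify is to point the Vault CLI at the CA that actually signed the server certificate. The mount path below is only a guess based on the chart's usual extraVolumes convention, so adjust it to wherever the CA is mounted in your pods:
kubectl exec -n vault vault-0 -- env \
  VAULT_CACERT=/vault/userconfig/vault-server-tls/vault.ca \
  vault status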

How to configure the cluster to provide signing

I have deployed DigitalOcean Chatwoot. Now I need to add the certificate created in the cluster config so that it can be accessed over HTTPS.
I am following this documentation to manage TLS certificates in the Kubernetes cluster.
After following the steps mentioned in the documentation, I got these results.
But, as mentioned in the last point of the documentation, I need to add the certificate keys to the Kubernetes controller manager. To do that I used this command:
kube-controller-manager --cluster-signing-cert-file ca.pem --cluster-signing-key-file ca-key.pem
But I am getting the error below:
I0422 16:02:37.871541 1032467 serving.go:348] Generated self-signed cert in-memory
W0422 16:02:37.871592 1032467 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0422 16:02:37.871598 1032467 client_config.go:622] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Can anyone please tell me how I can configure the cluster to provide signing, so that the app can be served securely over HTTPS?
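As a side note, the error above is what kube-controller-manager prints when the binary is run by hand with no way to reach the API server. If it really is run that way, it also needs a kubeconfig, roughly like the sketch below (the path is the kubeadm convention and just an assumption); on most clusters you would instead add the two signing flags to the existing controller-manager manifest or unit rather than start a second copy:
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --cluster-signing-cert-file ca.pem \
  --cluster-signing-key-file ca-key.pem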

Kops' Kubelet PKI management

[meta: there is more Kubernetes question activity here on SO than Serverfault so asking here even though it's not a programming question. Please flag for migration if asking here is inappropriate]
Summary: How are kubelet certificates signed in Kops? At least in our environment they seem to be using a CA per node.
Details
Inspecting the SSL certs of a kubelet endpoint in our Kops deployment (Kops is managing Kubernetes v1.12.9), I see the following cert details:
subject=CN = ip-10-1-2-3.ec2.internal#1561089780
issuer=CN = ip-10-1-2-3.ec2.internal-ca#1561089780
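(For reference, these details can be pulled straight from the kubelet's serving port with something like the following; the hostname is the example node above and 10250 is the default kubelet port.)
echo | openssl s_client -connect ip-10-1-2-3.ec2.internal:10250 2>/dev/null \
  | openssl x509 -noout -subject -issuer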
Note that the issuer appears specific to that node. How does the api-server actually speak to the kubelets? Surely auth would fail because the CA is unknown to the api-server. But it is obviously working, since the cluster is functional; I just don't understand why.
In contrast, for learning purposes, I set up a cluster manually and there the Kubelet cert subject and issuers are:
subject=CN=system:node:worker-1
issuer=CN=Kubernetes
(Some location boilerplate omitted)
This, as I'd expect, uses a common CA to sign all kubelet certs; the api-server then uses that CA with --client-ca-file to enable auth to the kubelets.
The reason for this, in my case, was that the kubelets are authorized via a webhook, so the certs don't come into play.

Kubernetes: security context and IPC_LOCK capability

I'm trying to install a Helm package that needs the IPC_LOCK capability, and I'm getting this message:
Error creating: pods "pod..." is forbidden: unable to validate against any security context constraint: [capabilities.add: Invalid value: "IPC_LOCK": capability may not be added capabilities.add: Invalid value: "IPC_LOCK": capability may not be added]
You can see the DeploymentConfig here.
I'm installing Vault using a Helm chart, so I'm not able to change the DeploymentConfig.
I guess the only way to get it would be to use a service account with an associated SCC that allows the container to run with that capability.
How could I solve that?
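If this is OpenShift (the SCC wording in the error suggests it is), one common approach is to grant the chart's service account an SCC that permits IPC_LOCK. The names below are placeholders, not taken from the question:
# "vault" service account and namespace are assumptions; the privileged SCC allows all capabilities
oc adm policy add-scc-to-user privileged -z vault -n vault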
I haven't worked with Vault yet, so my answer might not be accurate.
But I think you can remove that capability and disable mlock in the Vault config:
https://www.vaultproject.io/docs/configuration/index.html#disable_mlock
Having said that, I don't think Kubernetes supports memory swapping anyway (someone should verify this), so the mlock syscall might not be needed.
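For illustration, a minimal Vault server config stanza with mlock disabled might look like the following; how it gets passed in depends on the chart's values, so treat it as a sketch:
# Vault server configuration (HCL); disable_mlock is the relevant setting
disable_mlock = true

storage "file" {
  path = "/vault/data"   # storage stanza included only to make the snippet complete
}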

Not Able To Create Pod in Kubernetes

I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation.
After the installation, I could see all my nodes up and running and connected to the master. However, it kept failing when I tried to create a simple pod, following the 101 guide.
$ kubectl create -f pod-nginx.yaml
Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Do I need to create an API token? If yes, how?
I googled the issue but found no helpful results. It looks like I'm the only one on the planet who has hit this issue.
Does anyone have ideas on this?
The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized.
If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account.
It looks like that guide needs to be updated with the information from this comment:
https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691
# Generate an RSA key for signing and verifying service account tokens
openssl genrsa -out /tmp/serviceaccount.key 2048

# vim /etc/kubernetes/apiserver  (the apiserver verifies tokens with this key)
KUBE_API_ARGS="--service-account-key-file=/tmp/serviceaccount.key"

# vim /etc/kubernetes/controller-manager  (the token controller signs tokens with the same key)
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/tmp/serviceaccount.key"

# Restart both components so they pick up the new flags
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
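After the restarts, a quick way to confirm the token controller did its job (and that the pod creation from the question now goes through) is something like:
kubectl get serviceaccount default -o yaml   # should now list a token under "secrets:"
kubectl create -f pod-nginx.yaml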