Get public key for new cert from k3s - kubernetes

So I got locked out of my Kubernetes instance, presumably due to a cert expiration. It was created with k3sup, by someone with a lot more Kubernetes experience than me.
To dig into the issue, I used AWS Session Manager to connect to the instance. When I ran sudo kubectl get pods -A from within the instance, I got the same error as from my local machine:
error: You must be logged in to the server (Unauthorized)
I then ran sudo systemctl restart k3s to restart k3s, which supposedly rotates the certs. Now kubectl commands work from within the instance, which is great, but still not from my local machine.
If this did rotate the cert as I assume, I think I need the new public key for my local ~/.kube/config. Where do I get this?
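(Aside: to confirm whether the certs on disk were actually rotated, you can check their expiry dates; the paths below assume a default k3s install.)
sudo openssl x509 -noout -enddate -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt
sudo openssl x509 -noout -enddate -in /var/lib/rancher/k3s/server/tls/client-admin.crt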

I found the updated certs in the kube config at /etc/rancher/k3s/k3s.yaml
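To get that config onto a local machine, something like the following should work. Here my-k3s-node and the ubuntu user are placeholders for your instance, and since k3s.yaml is root-only by default, it is copied out with sudo first:
# on the node: make a user-readable copy of the root-only kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml > /tmp/k3s.yaml
# on the local machine: fetch it and point it at the node instead of 127.0.0.1 (GNU sed syntax)
scp ubuntu@my-k3s-node:/tmp/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/my-k3s-node/' ~/.kube/config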

Related

I tried running a pod with the nginx image. I am inside my VM, but the pod is not getting created

I have installed Oracle VM and created an Ubuntu machine. I have not tried any major cluster or tried deploying anything; I have started reading about Kubernetes and, as an example, just tried creating a simple pod. But I am getting a host error. Can anyone tell me where I am going wrong?
I have tried the simple kubectl run command
You need to disable swap on the system for the kubelet to work. You can disable swap with sudo swapoff -a and then restart the kubelet service with sudo systemctl restart kubelet.
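Roughly, the full sequence looks like this; the /etc/fstab edit is only needed if you want swap to stay off after a reboot:
# turn swap off now
sudo swapoff -a
# comment out swap entries so it stays off across reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# restart the kubelet so it picks up the change
sudo systemctl restart kubelet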

Rados Gateway installation (CEPH)

I have installed a Ceph cluster using cephadm (Octopus release).
Now I’m having problems installing the RADOS Gateway for the Ceph cluster using these instructions:
https://docs.ceph.com/en/latest/man/8/radosgw/
I’m following each step, but at the end this command:
sudo /etc/init.d/ceph-radosgw start
does not work, as the script could not be found.
So I’m running:
systemctl start ceph-radosgw.target
And that helps; checking the status of the service then shows that it’s running.
But I don’t see any gateway in the UI, and radosgw-admin just hangs forever, so I cannot create users. There are also no errors in the logs.
Is there someone who faced the same problem?
Maybe there is something I have to check or do additionally? Also, when I run the above commands, it says that the monitor configuration is not found. Is that a related issue?
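Not sure it applies to your setup, but a cephadm-managed cluster normally deploys the gateway through the orchestrator rather than the old init script from the manual instructions. A rough sketch follows; the service name myrgw is a placeholder, and the exact arguments to ceph orch apply rgw differ between Octopus and later releases, so check the cephadm docs for your version:
# ask the orchestrator to deploy an RGW service (argument form varies by release)
sudo ceph orch apply rgw myrgw
# confirm the rgw daemon was scheduled and is running
sudo ceph orch ls
sudo ceph orch ps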

After rebooting a CentOS 7 server, running kubectl get pod gives the error: the connection to the server localhost:8080 was refused

What happened:
When I reboot the CentOS 7 server and run kubectl get pod, I see the error below:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What you expected to happen:
Before I rebooted the system, the Kubernetes cluster had three nodes, and pods/services/etc. were all working fine.
How to reproduce it (as minimally and precisely as possible):
reboot the server
kubectl get pod
Anything else we need to know?
I even used sudo kubeadm reset and ran kubeadm init again, but the issue still exists!
There are a few things to consider:
kubeadm reset performs a best effort revert of changes made by kubeadm init or kubeadm join. So some configurations may stay on the cluster.
Make sure you run kubectl as the proper user. You might need to copy admin.conf to $HOME/.kube/config for that user.
After kubeadm init you need to run the following commands:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Make sure you do so.
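Equivalently, kubeadm init's own output suggests putting the file in the default location, which avoids exporting KUBECONFIG in every new shell:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config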
Check CentOS's firewall configuration. After the restart it might go back to defaults.
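For example, with firewalld something like this re-opens the main control-plane ports (the full port list is in the kubeadm requirements; adjust for your setup):
sudo firewall-cmd --state
sudo firewall-cmd --permanent --add-port=6443/tcp    # API server
sudo firewall-cmd --permanent --add-port=10250/tcp   # kubelet
sudo firewall-cmd --reload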
Please let me know if that helped.

Kubernetes ssh into nodes not working in local

How can I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access a Kubernetes node's shell.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button on the left side of the desired node instance.
If you are using local VMs (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is only aware of nodes at the level of secure communication with the kubelets on the nodes (getting the hostname and IP from each node), and as such does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines, you can set up your own ssh keypairs and do the trick.
You can effectively shell into a pod using kubectl exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed in the container.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is on your PATH.
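For example, assuming a reasonably recent kubectl (which discovers any executable named kubectl-<name> on the PATH as a plugin) and /usr/local/bin as the install location:
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
kubectl plugin list   # should now list kubectl-ssh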

Minikube on Windows with VirtualBox: Connection attempt fail

I got Kubernetes Minikube on my laptop (4 cores, 8 GB RAM). I just performed the basic installation steps (got minikube and kubectl, enabled virtualization in the BIOS) and I am able to start the cluster:
C:\Users\me>minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
However, when I try to interact with the cluster, I always get the same error, for example:
C:\Users\me>kubectl get pods --context=minikube
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: A connection attempt failed because the connected party
did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I ran minikube ip, pinged the resulting IP, and got a response. I also tried giving it more memory (3 GB vs. the standard 2 GB) and nothing changed.
Am I doing something wrong here?
Thanks!
I had the same issue as above. I found out that kubectl couldn't connect to the cluster and would throw this error whenever I was on a VPN connection. When I turned off my VPN client, it started working fine.
I think it could be some problem with the cluster; when I run minikube status I get mixed results of the cluster running and the cluster stopped:
First run:
c:\> minikube status
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
Second run:
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
Third run:
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
The service is flapping.
UPDATED:
Connecting to the minikube VM using minikube ssh, I realized the kubeconfig file has the wrong path separators for the certificates generated by minikube's automatic configuration. The paths in the kubeconfig file read \var\lib\localkube\certs\ca.cert, but they have to be /var/lib/localkube/certs/ca.cert, and so on.
To fix the file I had to copy the content of the original file to my desktop, fix the directory separators, save the corrected file back to /var/lib/localkube/kubeconfig, and restart the service using:
sudo systemctl restart localkube
I hope everyone can use minikube with this tip.
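If you would rather not copy the file back and forth, a sed one-liner inside the VM (via minikube ssh) should do the same separator fix; back the file up first:
sudo cp /var/lib/localkube/kubeconfig /var/lib/localkube/kubeconfig.bak
# replace Windows-style backslashes with forward slashes
sudo sed -i 's|\\|/|g' /var/lib/localkube/kubeconfig
sudo systemctl restart localkube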
If you keep hitting the 8443 connection issue when changing work environments, one way to simplify things is to turn off TLS verification for the local minikube cluster if there is no other clue:
https://github.com/robertluwang/docker-hands-on-guide/blob/master/minikube-no-tls-verify.md
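In kubectl terms, skipping TLS verification for the minikube cluster looks roughly like this (kubectl refuses the insecure flag while a certificate authority is still set, hence the unset first):
kubectl config unset clusters.minikube.certificate-authority
kubectl config set-cluster minikube --insecure-skip-tls-verify=true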
Hope it is helpful for you.
BR/
Robert
From the documentation, for troubleshooting:
Run minikube start --alsologtostderr -v=7 to debug crashes
I had the same problem: check whether a VPN service is running in the task manager. In my case, my VPN had a service running, so I killed the task and ran the command shown above again.