Hi, I have a server configured with Kubernetes (without using minikube). I can execute kubectl commands without problems, like kubectl get all, kubectl delete pod, kubectl apply ...
I would like to know how to allow another user on my server to execute kubectl commands, because if I switch to another user and try to execute kubectl get all -s localhost:8443 I get:
Error from server (BadRequest): the server rejected our request for an unknown reason
I have read the Kubernetes Authorization documentation, but I'm not sure it is what I'm looking for.
This is happening because there is no kubeconfig file for the user. You need to have the same kubeconfig file for the other user, either in the default location $HOME/.kube/config or in a location pointed to by the KUBECONFIG environment variable.
You can copy the existing kubeconfig file from the working user to the above location for the non-working user.
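For example, a minimal sketch of copying the kubeconfig over, assuming the working user is user1 and the other user is user2 (both names are placeholders):

# run as root or via sudo; adjust user names and paths to your system
mkdir -p /home/user2/.kube
cp /home/user1/.kube/config /home/user2/.kube/config
chown -R user2:user2 /home/user2/.kube

After that, user2 should be able to run kubectl get all without the -s flag, since the server address now comes from the kubeconfig.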
Related
I am facing permission denied errors when using kubectl for all commands, be it get pods or apply, but I am able to use helm and log in with k9s to perform destructive actions. I am using the same context for all of these actions.
kubectl get nodes
# error: You must be logged in to the server (Unauthorized)
kubectl apply -f some-manifest.yaml
# error: You must be logged in to the server (the server has asked for the client to provide credentials)
Does anyone have a hint as to why this is happening or what to look further into? I am using a managed k8s on Vultr, a smaller cloud provider.
Don't know what specifically the issue was, but I rebuilt my .kube/config file slowly with all my contexts and it ended up working again.
Very strange that helm worked and kubectl didn't, though...
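For reference, rebuilding a context by hand looks roughly like this; every name, address, and file path below is a placeholder to adapt to your cluster:

kubectl config set-cluster my-cluster --server=https://203.0.113.10:6443 --certificate-authority=ca.crt
kubectl config set-credentials my-user --client-certificate=user.crt --client-key=user.key
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context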
I am pretty sure that this is a "kubernetes context" problem
Check the solution here: helm and kubectl context mismatch
Solution for k9s can be found here: https://k9scli.io/topics/commands/
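To check whether kubectl and helm are actually resolving the same context, you can compare them directly (standard kubectl and Helm 3 commands):

kubectl config current-context
kubectl config get-contexts
helm env    # prints HELM_KUBECONTEXT and HELM_KUBECONFIG as helm resolves them

If HELM_KUBECONTEXT is empty, helm falls back to the kubeconfig's current context, so a mismatch usually points at a KUBECONFIG environment variable or a --kube-context flag being set somewhere.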
I am running a WordPress site built with a Kubernetes deployment. The deployment has only one container in the pod. Could you please clarify the possible reasons why no logs come through the kubectl logs command? I have tried docker logs on the container as well, which shows the same output.
#~$ kubectl logs -f <pod_name> -n <namespace>
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using xx.xx.xx.xx. Set the 'ServerName' directive globally to suppress this message
This is the only content in the logs, even after accessing the domain.
# docker logs -f <k8s_container_name>
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using xx.xx.xx.xx. Set the 'ServerName' directive globally to suppress this message
kubectl logs shows only what the container writes to the default STDOUT and STDERR streams.
You need to write to the standard streams, for example: file_put_contents('php://stderr', 'Your text goes to STDERR', FILE_APPEND);
Or at least append to a file and read from there; alternatively, if you are developing the WordPress PHP code, you can also push logs to an external logging system.
Read more at: https://www.php.net/manual/en/features.commandline.io-streams.php and https://docs.docker.com/config/containers/logging/
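Another common approach, used by the official httpd and wordpress Docker images, is to symlink the web server's log files to the container's standard streams so that kubectl logs picks them up. A sketch, assuming Apache writes its logs under /var/log/apache2 (the exact path depends on your base image):

# run in an image build step (Dockerfile RUN) or an init script
ln -sf /dev/stdout /var/log/apache2/access.log
ln -sf /dev/stderr /var/log/apache2/error.log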
When I use the kubectl get namespace command on my Kubernetes master node, I get proper output. I also configured kubectl on my local machine. When I run the same command from the local machine, I get an error like the following:
Error from server (Forbidden): namespaces is forbidden: User "system:node:mildevkub020" cannot list resource "namespaces" in API group "" at the cluster scope
I copied the configuration file kubelet.conf from the cluster into .kube/config and installed kubectl. This is the process I have followed so far.
The result of kubectl config view is like the following:
How can I resolve this issue?
The error message shows the cause: kubelet.conf authenticates you as the node identity system:node:mildevkub020, and node credentials are not authorized to list namespaces at the cluster scope. Use the cluster admin kubeconfig instead. Kubespray by default saves the cluster admin kubeconfig file as inventory/mycluster/artifacts/admin.conf. Read more here: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#accessing-kubernetes-api
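A sketch of pulling that file down to the local machine, assuming the default Kubespray layout (the host name and paths are placeholders):

scp ubuntu@master-node:~/kubespray/inventory/mycluster/artifacts/admin.conf ~/.kube/config
kubectl get namespaces   # should now authenticate with admin credentials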
My goal is to set up the AppsCode Guard application.
In order to do so, I need to set the value of the authentication-token-webhook-config-file flag on the Kubernetes API server.
How can I do that?
If you are looking for a way to add an option to the kube-apiserver pod on an existing cluster, you just need to edit the file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node.
After saving this file, the kube-apiserver pod will be restarted by the kubelet service automatically.
Considering that the flag you've mentioned takes the name of a configuration file as its parameter, ensure the file exists on the master node's file system.
--authentication-token-webhook-config-file string
File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.
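For illustration, the relevant fragment of /etc/kubernetes/manifests/kube-apiserver.yaml would look roughly like this; the webhook config path is a placeholder and must point at a file that actually exists on the master node:

spec:
  containers:
  - command:
    - kube-apiserver
    - --authentication-token-webhook-config-file=/etc/kubernetes/guard-webhook.kubeconfig
    # ...the existing flags stay as they are...

Note that since the API server runs in a container, the file must also be reachable from inside it, e.g. via an existing or added hostPath mount such as /etc/kubernetes.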
The directory for the manifests is defined by the kubelet option --pod-manifest-path and can be found with the command:
$ ps aux | grep kubelet
You can find more information about the life cycle of such pods (static pods) in the Kubernetes documentation.
I was trying to add a command-line flag to the API server. In my setup, it was running as a daemon set inside the k8s cluster, so I got the daemon set manifest using kubectl, updated it, and executed kubectl apply -f apiserver.yaml (I know, this was not a good idea).
Of course, the new yaml file I wrote had an error, so the API server is not starting anymore and I can't use kubectl to update it. I have an ssh connection to the node where it was running, and I can see how the kubelet is trying to run the apiserver pod every few seconds with the ill-formed command. I am trying to configure the kubelet service to use the correct api-server command but have not been able to do so.
Any ideas?
The API server definition usually lives in /etc/kubernetes/manifests. Edit the configuration there rather than at the API level.
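A sketch of the recovery, assuming kubeadm-style paths on the node (adjust to your setup):

# on the node, over ssh
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # fix the bad flag here
# kubelet watches this directory and recreates the static pod on its own;
# follow its logs to confirm the pod comes back up:
sudo journalctl -u kubelet -f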