Failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict" - kubernetes

I am new to Kubernetes. I am just trying to create a TLS secret using kubectl; my ultimate goal is to deploy a Keycloak cluster in Kubernetes.
So I followed a YouTube tutorial, but it doesn't mention how to generate my own TLS key and TLS cert. To do that, I used this documentation (https://www.linode.com/docs/guides/create-a-self-signed-tls-certificate/).
Then I was able to generate MyCertTLS.crt and MyKeyTLS.key:
gayan@Gayan:/srv$ cd certs
gayan@Gayan:/srv/certs$ ls
MyCertTLS.crt  MyKeyTLS.key
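(For reference, the linked guide boils down to a single openssl command along these lines; the key size, validity period, and CN here are just placeholders:)
# Generate a self-signed certificate and matching private key.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout MyKeyTLS.key -out MyCertTLS.crt \
  -subj "/CN=keycloak.example.com"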
To create the secret in Kubernetes, I ran this command:
sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
But it's not working; I got this error:
gayan@Gayan:/srv/certs$ sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
[sudo] password for gayan:
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
Note:
Minikube is running, and the Ingress addon is also enabled.
I have created a namespace called keycloak-test.
gayan@Gayan:/srv/keycloak$ kubectl get namespaces
NAME                   STATUS   AGE
default                Active   3d19h
ingress-nginx          Active   119m
keycloak-test          Active   4m12s
kube-node-lease        Active   3d19h
kube-public            Active   3d19h
kube-system            Active   3d19h
kubernetes-dashboard   Active   3d19h
I am trying to fix this error, but I have no idea why I am getting it; looking for a solution from the genius community.

I figured this out! I am posting it because it may be helpful for someone.
I was getting that error,
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
because my Kubernetes API server is running on a different port.
You can see which port your API server is running on with this command:
kubectl config view
If, for example, the output shows server: https://localhost:40475, it means your server is running on port 40475.
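If you only want the server URL, a one-liner like this should also work (assuming the current context is the one you care about):
# Print just the API server URL for the current kubectl context.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'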
(For reference, the kube-apiserver's default secure port is 6443, and Minikube's API server typically listens on 8443.)
You then have to specify the correct port in your kubectl command to create the secret.
So I added --server=https://localhost:40475 to my command:
kubectl create secret tls my-tls --key="tls.key" --cert="tls.crt" -n keycloak-test --server=https://localhost:40475
Another thing: if you get an error like permission denied,
you have to make your tls.key and tls.crt files readable by your user.
I did this by running these commands:
sudo chmod 666 tls.crt
sudo chmod 666 tls.key
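A slightly safer variant of the same fix, since chmod 666 leaves the private key world-writable, is to take ownership of the files and keep the key private (substitute your own user):
# Take ownership, then restrict the key to your user only.
sudo chown "$USER":"$USER" tls.crt tls.key
chmod 600 tls.key
chmod 644 tls.crt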
Then run the kubectl command above without sudo. It works!
If you run that command with sudo, it will ask for a username and password, which confused me and did not work.
So that is how I solved this issue.
Hope this helps someone! Thanks!

In your examples, kubectl get namespaces works, but sudo kubectl create secret doesn't.
You don't need sudo to work with Kubernetes. In particular, the connection information is stored in a $HOME/.kube/config file by default, but when you run sudo kubectl ..., that changes the home directory and kubectl can't find the connection information.
The standard Kubernetes assumption is that the cluster is remote, and so your local user ID doesn't really matter to it. All that does matter is the Kubernetes-specific permissions assigned to the user that's accessing the cluster.
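A quick way to see this for yourself (a sketch; exactly what sudo does to HOME depends on your sudoers configuration):
# kubectl reads connection details from $HOME/.kube/config. Under sudo, HOME
# typically becomes /root, so kubectl finds no config and falls back to
# http://localhost:8080, which is exactly the refused address in the error above.
kubectl config view        # as your user: shows the Minikube server entry
sudo kubectl config view   # as root: likely empty
# If you really must run as root, point kubectl at your config explicitly:
sudo kubectl --kubeconfig /home/gayan/.kube/config get namespaces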

Related

Cannot access the proxy of a kubernetes pod

I created a Kubernetes cluster on my Debian 9 machine using kind,
which apparently works, because I can run kubectl cluster-info with valid output.
Now I wanted to fool around with the tutorial on the Learn Kubernetes Basics site.
I have already deployed the app
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
and started the kubectl proxy.
Output of kubectl get deployments:
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           17m
My problem now is: when I try to see the output of the application using curl I get
Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'
My commands
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
For the sake of completeness I can run curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/ and I get valid output.
The steps in that tutorial module assume an environment where you are working on one of the cluster nodes,
and the command checks connectivity to the service locally on that node.
In your case, however, because your Kubernetes runs inside Docker containers (kind), the curl command is most likely run from the host that is serving those containers.
It might be possible to use docker exec to get inside a kind node and run the curl command from there.
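Something along these lines might work (the node container name is an assumption; check yours with docker ps, and the pod IP and port come from the error above and the port note below):
# List the kind node containers; a default cluster has one named kind-control-plane.
docker ps --filter "name=kind"
# From inside the node the pod network is routable, so curl the pod directly
# (curl may not be preinstalled in the node image).
docker exec -it kind-control-plane curl http://10.244.0.5:8080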
Hope this helps.
I'm also following the tutorial using kind, and I got it to work by forwarding the port:
kubectl port-forward $POD_NAME 8001:8001
Try adding :8080 after the $POD_NAME:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/

Kubernetes dashboard

I have been able to successfully set up Kubernetes on my CentOS 7 server.
When trying to get the dashboard working after following the documentation, running kubectl proxy
attempts to serve on 127.0.0.1:9001 and not my server IP. Does this mean I cannot access the Kubernetes dashboard from outside the server?
I need help getting the dashboard running using my public IP.
You can specify which address you want kubectl proxy to listen on, e.g.:
kubectl proxy --address <EXTERNAL-IP> -p 9001
Starting to serve on 100.105.***.***:9001
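One caveat worth noting: by default, kubectl proxy only accepts requests whose Host header matches localhost, so remote clients may still be rejected unless you also widen --accept-hosts, for example:
# Listen on all interfaces and accept any Host header. Fine for a quick test,
# but this exposes the API through the proxy, so don't leave it on a public IP.
kubectl proxy --address 0.0.0.0 --port 9001 --accept-hosts '^.*$'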
You can also use port forwarding to access the dashboard.
kubectl port-forward --address 0.0.0.0 pod/dashboard 8888:80
This will listen on port 8888 on all addresses and route traffic directly to your pod.
For instance:
rsha:~$ kubectl port-forward --address 0.0.0.0 deploy/webserver 8888:80
Forwarding from 0.0.0.0:8888 -> 80
In another terminal, run:
rsha:~$ curl 100.105.***.***:8888
<html><body><h1>It works!</h1></body></html>
As I understand it, you would like to access the dashboard from your laptop. What you should do is create an admin service account called k8s-admin:
$ kubectl --namespace kube-system create serviceaccount k8s-admin
$ kubectl create clusterrolebinding k8s-admin --serviceaccount=kube-system:k8s-admin --clusterrole=cluster-admin
Then set up kubectl on your laptop; for macOS it looks like this (see the documentation):
$ brew install kubernetes-cli
Set up a proxy to your workstation: create a ~/.kube directory on your laptop and then scp the ~/.kube/config file from the Kubernetes master into it.
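In other words, something like this, with your own user and master hostname substituted:
# Copy the kubeconfig from the master down to your laptop.
mkdir -p ~/.kube
scp user@k8s-master:~/.kube/config ~/.kube/config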
Then get the authentication token you need to connect to the dashboard:
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')
Now start the proxy:
$ kubectl proxy
Now open the dashboard by going to:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
You should see the Token option; copy-paste the token from the prior step and sign in.
You can follow this tutorial.

Access Kubernetes API with kubectl failed after enabling RBAC

I'm trying to enable RBAC on my cluster, and I added the following lines to kube-apiserver.yml:
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=admin
Then I ran systemctl restart kubelet;
the API server starts successfully, but I'm not able to run kubectl commands and I get this error:
kubectl get po
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
Where am I going wrong? Or should I create some roles for the kubectl user? If so, how is that possible?
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
You are using the user kubectl to access the cluster via the kubectl utility, but you set --authorization-rbac-super-user=admin, which means your super-user is admin.
To fix the issue, launch kube-apiserver with superuser "kubectl" instead of "admin."
Just update the value of the option: --authorization-rbac-super-user=kubectl.
Old question, but for Google searchers: you can use the insecure port.
If your API server runs with the insecure port enabled (--insecure-port), you can also make API calls via that port, which does not enforce authentication or authorization.
Source: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
So add --insecure-port=8080 to your kube-apiserver options and then restart it.
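In the same kube-apiserver.yml flag-list style as the question above, that is:
- --insecure-port=8080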
Then run:
kubectl create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl
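If your kubeconfig still points at the secure port, you can aim this one call at the insecure port explicitly (a sketch, assuming you run it on the API server host):
kubectl --server=http://localhost:8080 create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl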
Then turn the insecure-port off.

Why tiller connect to localhost 8080 for kubernetes api?

When using Helm for Kubernetes package management, after installing the helm client and running
helm init
I can see Tiller pods running on the Kubernetes cluster, but when I then run helm ls, it gives an error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
and with kubectl logs I can see a similar message:
[storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I can see the Tiller pod is running on one of the nodes instead of the master, and there is no API server running on that node. Why does it connect to 127.0.0.1 instead of my master IP?
Run this before doing helm init; it worked for me (--raw includes the certificate data that kubectl config view normally redacts, so the resulting file is complete):
kubectl config view --raw > ~/.kube/config
First delete the Tiller deployment and service by running the commands below:
kubectl delete deployment tiller-deploy --namespace=kube-system
kubectl delete service tiller-deploy --namespace=kube-system
rm -rf $HOME/.helm/
By default, helm init installs the Tiller pod into the kube-system namespace, with Tiller configured to use the default service account.
Configure Tiller with cluster-admin access with the following command:
kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Then install helm server (Tiller) with the following command:
helm init
I had been having this problem for a couple of weeks on my workstation, and none of the answers provided (here or on GitHub) worked for me.
What did work is this:
sudo kubectl proxy --kubeconfig ~/.kube/config --port 80
Notice that I am using port 80, so I needed sudo to be able to bind the proxy there; if you use 8080 you won't need that.
Be careful with this, because the kubeconfig file that the command above points to is /root/.kube/config rather than the one in your usual $HOME. You can either use an absolute path to the config you want to use, create one in root's home, or use sudo's --preserve-env=HOME flag to keep your original HOME env var.
Now, if you are using Helm by itself, I guess this is it. In my setup, which uses Helm through the Terraform provider on GKE, this was a pain in the ass to debug, as the message I was getting doesn't even mention Helm and is returned by Terraform when planning. For anybody that may be in a similar situation:
These are the errors when doing a plan/apply operation in Terraform on any cluster with Helm releases in the state:
Error: error installing: Post "http://localhost/apis/apps/v1/namespaces/kube-system/deployments": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/system/secrets/apigee-secrets": dial tcp [::1]:80: connect: connection refused
You get one of these errors for every Helm release in the cluster, or something like that. In this case, for a GKE cluster, I had to ensure that the env var GOOGLE_APPLICATION_CREDENTIALS pointed to the key file with valid credentials (the application-default credentials, unless you are using a non-default setup for application auth):
gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json
With the kube proxy in place and the correct credentials, I am again able to use Terraform (and Helm) as usual. I hope this is helpful for anybody experiencing this.
kubectl config view --raw > ~/.kube/config
export KUBECONFIG=~/.kube/config
worked for me

Kubernetes dashboard cannot be started

I created the dashboard after installing Kubernetes with kubeadm:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
After waiting a while, the pod crashed:
kubectl get pods --all-namespaces
kubernetes-dashboard-3203831700-wq0v4 0/1 CrashLoopBackOff 3 3m
And I checked the pod log:
kubectl logs -f kubernetes-dashboard-3203831700-wq0v4 -n kube-system
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
But when I tried it manually, the URL is reachable:
# curl https://10.96.0.1:443/version
curl: (35) Peer reports incompatible or unsupported protocol version.
Has anybody encountered this issue before? Can anyone help me?
I executed the following command:
rm -rf ~/.kube
Now it works. Still a bit strange :-(