Is it possible to disable the Kubernetes dashboard TLS check - kubernetes

I am logging in to the Kubernetes dashboard on my local machine (http://kubernetes.dolphin.com:8443/#/login), and I have defined a virtual domain name in /etc/hosts:
192.168.31.30 kubernetes.dolphin.com
Now when I log in to the Kubernetes dashboard using this domain, it shows this message:
Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost.
Is it possible to disable the TLS security check of the Kubernetes dashboard (kubernetesui/dashboard:v2.0.3) in the dashboard YAML? My Kubernetes runs on my local machine and does not need TLS security. My dashboard login currently looks like this.

Enable Kubernetes dashboard HTTP access:
containers:
  - name: kubernetes-dashboard
    image: 'kubernetesui/dashboard:v2.0.3'
    args:
      - '--namespace=default'
      - '--insecure-port=5443'
Then you can use port 5443 to forward the Kubernetes dashboard traffic, and no login is required. But you should not do this in a production environment.
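For context, here is a sketch of how that container spec sits inside the full Deployment. The metadata names and namespace are assumptions based on the standard dashboard manifests; adjust them to match your install:

```yaml
# Hypothetical fragment of the kubernetes-dashboard Deployment;
# only the parts relevant to insecure HTTP access are shown.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard   # assumed namespace
spec:
  template:
    spec:
      containers:
        - name: kubernetes-dashboard
          image: 'kubernetesui/dashboard:v2.0.3'
          args:
            - '--namespace=default'
            - '--insecure-port=5443'
            - '--enable-insecure-login'   # allow sign-in over plain HTTP
          ports:
            - containerPort: 5443
              protocol: TCP
```

With this in place, something like `kubectl port-forward deployment/kubernetes-dashboard 5443:5443 -n kubernetes-dashboard` would let you reach the dashboard over plain HTTP on localhost.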

Related

Keycloak in EKS with CloudFront

I have configured Keycloak in EKS, and the application is exposed using CloudFront. When I browse the application and enter the credentials, they are sent over HTTP instead of HTTPS. Here is my configuration on CloudFront:
Configuration in the nginx ingress controller:
After the credentials are entered on the Keycloak login page, it tries to send them over HTTP.
Require SSL is set to none in the realm, and Keycloak is running with PROXY_ADDRESS_FORWARDING set to true. In the nginx ingress I have other services running on HTTP, so Keycloak can't be kept on HTTPS.
Can someone please suggest how this can be solved?
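For reference, the PROXY_ADDRESS_FORWARDING setting mentioned in the question looks roughly like this in a Keycloak Deployment. The image name and surrounding structure are assumptions for illustration, not taken from the thread:

```yaml
# Sketch of a Keycloak Deployment fragment running behind a reverse proxy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak   # illustrative name
spec:
  template:
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak   # assumed image
          env:
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"   # trust X-Forwarded-* headers from the proxy
```

Whether the login form posts to HTTP or HTTPS then depends on the X-Forwarded-Proto header actually reaching Keycloak through CloudFront and the ingress, which is worth verifying in this setup.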

Connect gitlab to kubernetes cluster hosted in rancher

I am trying to connect to a Rancher 2 Kubernetes cluster from GitLab. My kubeconfig looks like:
apiVersion: v1
kind: Config
clusters:
  - name: "k8s"
    cluster:
      server: "https://..."
  - name: "k8s-desktop"
    cluster:
      server: "https://192.168.0.2:6443"
      certificate-authority-data: ...
I need to point GitLab at the clusters[].cluster.server value, https://192.168.0.2:6443, which is an internal IP. How can I override this value in the kubeconfig with my external IP so GitLab is able to connect?
When you log in to Rancher, you can download the kubeconfig file. It will use the Rancher URL on port 443. Your kubeconfig seems to point directly at your k8s node, like the kubeconfig you obtain when using RKE.
If by external IP you mean connecting from outside, then you need a device capable of port forwarding. Please clarify what you mean by internal/external IP.
From my side, I have no problem giving GitLab the Rancher URL in order to connect to k8s. Rancher will proxy the connection to the k8s cluster.
I don't see any reason to change your server IP to an external one.
What you should do is create a port forward from the internal https://192.168.0.2:6443 to your external IP, and then use the external URL with the forwarded port in the GitLab Kubernetes API URL field.
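As a sketch, the overridden cluster entry would then look something like this. The external hostname is a placeholder, not a value from the thread:

```yaml
# Hypothetical kubeconfig fragment: point GitLab at an externally
# reachable address instead of the internal node IP.
apiVersion: v1
kind: Config
clusters:
  - name: "k8s-desktop"
    cluster:
      server: "https://my-external-host.example.com:6443"  # placeholder external address
      # If the API server's certificate does not cover the external name,
      # you may need insecure-skip-tls-verify: true (testing only).
```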

Grafana throws HTTP Error Bad Gateway for Prometheus data source

I have Grafana and Prometheus set up on my k8s cluster. Both were installed through Helm using the charts from https://github.com/helm/charts/tree/master/stable.
Both Grafana and Prometheus are exposed through the k8s nginx ingress with my domain addresses.
When I try to set up Prometheus as a data source in Grafana, I get "HTTP Error Bad Gateway". In the Chrome console on the Grafana page I see:
http://grafana.domain.com/api/datasources/proxy/1/api/v1/query?query=1%2B1&time=1554043210.447
Grafana version: Grafana v6.0.0 (commit: 34a9a62)
Grafana data source settings for Prometheus:
URL: https://prometheus.mydomain.com:9090
Access: Server(Default)
Auth:
Basic & TLS Client Auth
What might be wrong and how to debug/fix it?
In the Grafana data source settings for the Prometheus database, add the Prometheus service DNS name and service port, like below:
<prometheus-service-name>.<namespace>.svc.cluster.local:9090
If you run Grafana and Prometheus in Docker on your local machine, the following will do for the data source settings:
Add the host as host.docker.internal:{port}, for example:
http://host.docker.internal:9090
When you are trying to add a data source like Prometheus, it is a little bit confusing because Grafana asks for an HTTP URL, but you have to put in your IP address.
Open CMD, run ipconfig /all, and look at the IPv4 address to find your IP.
Then, for Prometheus, put the following in the URL field:
http://{your_IP}:9090
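If Grafana is provisioned from files rather than through the UI, the in-cluster service address from the first answer can be set declaratively. The service name and namespace below are assumptions; substitute the ones from your Helm release:

```yaml
# Hypothetical Grafana datasource provisioning file,
# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                 # "Server" access mode in the UI
    url: http://prometheus-server.monitoring.svc.cluster.local:9090  # assumed service/namespace
    isDefault: true
```

Using the internal service DNS name with Server access mode avoids routing the query through the ingress, which is a common source of the Bad Gateway error described above.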

Kubernetes Dashboard does not accept service account's token over HTTP: Authentication failed. Please try again

I have installed Kubernetes Dashboard on a Kubernetes 1.13 cluster as described here:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I've also configured the dashboard to serve its content insecurely because I want to expose it via an Ingress at https://my-cluster/dashboard where the TLS connection will be terminated at the ingress controller.
I have edited service/kubernetes-dashboard in namespace kube-system and changed ports from {port:443, targetPort:8443} to {port:80, protocol:TCP, targetPort:9090}.
I have edited deployment/kubernetes-dashboard in namespace kube-system and changed
ports from {containerPort: 8443, protocol:TCP} to {containerPort: 9090, protocol:TCP} (and the livenessProbe analogously). I have also changed args from [ --auto-generate-certificates ] to [ --enable-insecure-login ].
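Taken together, the two edits described above amount to something like the following fragments of the standard manifests (only the changed fields are shown):

```yaml
# Service: serve plain HTTP on port 80 -> container port 9090
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 9090
---
# Deployment: listen on 9090 without certificates, allow insecure login
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: kubernetes-dashboard
          args:
            - --enable-insecure-login
          ports:
            - containerPort: 9090
              protocol: TCP
```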
This allows me to contact the dashboard from a cluster node via HTTP at the service's cluster IP address and port 80 (no Ingress is configured yet).
I have also created a sample user as explained here and extracted its token. The token works e.g. in kubectl --token $token get pod --all-namespaces, so it apparently possesses cluster-admin privileges. However, if I enter the same token into the dashboards' login screen I get "Authentication failed. Please try again.".
What could be the reason, and how can I further diagnose and solve the issue? (The dashboard's log does not provide any help at this point.)
UPDATE If I keep the dashboard's standard configuration (i.e. for secure access over HTTPS) the same token is accepted.

Accessing Kubernetes API via Kubernetes Dashboard Host

So the idea is that the Kubernetes dashboard accesses the Kubernetes API to give us beautiful visualizations of the different 'kinds' running in the Kubernetes cluster, and the method by which we access the dashboard is the proxy mechanism of the Kubernetes API, which can then be exposed to a public host for public access.
My question is: is there any possibility that we can use that Kubernetes API proxy mechanism for some other service inside the cluster via the publicly exposed address of the Kubernetes Dashboard?
Sure you can. So after you set up your proxy with kubectl proxy, you can access the services with this format:
http://localhost:8001/api/v1/namespaces/kube-system/services/<service-name>:<port-name>/proxy/
For example for http-svc and port name http:
http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/
Note: it's not necessarily for public access, but rather a proxy for you to connect from your public machine (say your laptop) to a private Kubernetes cluster.
You can do it by changing your service to NodePort:
$ kubectl -n kube-system edit service kubernetes-dashboard
You should see yaml representation of the service. Change type: ClusterIP to type: NodePort and save file.
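After that edit, the relevant part of the Service looks roughly like this (the nodePort value is an example; Kubernetes assigns one from the 30000-32767 range if you omit it):

```yaml
# Sketch of the kubernetes-dashboard Service after switching to NodePort.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # was ClusterIP
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # example port in the default NodePort range
```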
Note: this way of accessing the Dashboard is only possible if you choose to install your user certificates in the browser. The certificates used by the kubeconfig file to contact the API server can be used.
Please check the following articles and URLs for better understanding:
Stackoverflow thread
Accessing Dashboard 1.7.X and above
Deploying a publicly accessible Kubernetes Dashboard
How to access kubernetes dashboard from outside cluster
Hope it will help you!
Exposing the Kubernetes Dashboard is not secure at all, but your answer is about the K8s API server, which needs to be accessible by external services.
The right answer differs according to your platform and infrastructure, but as general points:
[Network Security] Limit public IP reachability to the K8s API server(s) / load balancer, if one exists, using a whitelist mechanism
[Network Security] Private-to-private reachability is better, e.g. a VPN or AWS PrivateLink
[API Security] Limit privileges with a ClusterRole/Role to enforce RBAC; better to keep it to read-only verbs { get, list }
[API Security] Enable audit logging for k8s components to keep track of events and actions
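A read-only ClusterRole along the lines of the RBAC point above might look like this. The role name and resource list are illustrative, not prescribed by the answer:

```yaml
# Hypothetical read-only ClusterRole: grants only get/list
# on a few common resource types.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: readonly-viewer   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "namespaces"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list"]
```

Binding this role to an external service's ServiceAccount or user via a ClusterRoleBinding keeps its API access limited to read operations.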