Enable SSL connection for Kubernetes Dashboard - kubernetes

I use this command to install and enable Kubernetes dashboard on a remote host:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
kubectl proxy --address='192.168.1.132' --port=8001 --accept-hosts='^*$'
http://192.168.1.132:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
But I get:
Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Read more here.
Is it possible to enable SSL connection on the Kubernetes host so that I can access it without this warning message and enable login?

From the service definition
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
you can see it already exposes port 443 (i.e. HTTPS), so TLS is preconfigured. First, use https instead of http in your URL.
Then, instead of doing a kubectl proxy, why not simply
kubectl port-forward -n kubernetes-dashboard services/kubernetes-dashboard 8001:443
Access the endpoint via https://127.0.0.1:8001/#/login
Now it's going to give the typical "certificate not signed" warning, since the certificates are self-signed (the --auto-generate-certificates argument in the deployment definition). Just skip it in your browser. See an article like https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/ if you need to configure a signed certificate.
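If you do want to go that route, here is a minimal sketch, assuming you already have a signed certificate and key as local files dashboard.crt and dashboard.key (placeholder names): store them in the kubernetes-dashboard-certs secret that the recommended.yaml deployment already mounts, and point the dashboard at them instead of --auto-generate-certificates.
# Recreate the certs secret from your signed certificate (file names are placeholders)
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=tls.crt=dashboard.crt \
  --from-file=tls.key=dashboard.key \
  -n kubernetes-dashboard
# In the kubernetes-dashboard Deployment, replace --auto-generate-certificates with:
#   - --tls-cert-file=tls.crt
#   - --tls-key-file=tls.key
kubectl -n kubernetes-dashboard rollout restart deployment kubernetes-dashboard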

Try this
First, do a port-forward to your computer; this forwards port 8443 on your computer (the first port) to port 8443 in the pod (the one that is exposed according to the manifest):
kubectl port-forward pod/kubernetes-dashboard 8443:8443 # Make sure you switched to the proper namespace
In your browser, go to http://localhost:8443; based on the error message, it should work.
If the Kubernetes Dashboard pod serves TLS in its web server, go to https://localhost:8443 instead.

Related

Control the pod domain

I have pod1 and pod2 in the same namespace.
pod2 is running an HTTP server.
How can I easily get pod2 to be seen as pod2.mydomain.com from pod1?
In this way the HTTPS certificate would work with no problem.
This can be achieved inside the Kubernetes cluster, and it is important to use a valid SSL certificate for the hostname.
Using a Kubernetes Service, you can create a Service of type ClusterIP or NodePort in the same namespace as the pods to expose the pod2 HTTP server at a stable IP and port. You can then configure your DNS to map pod2.mydomain.com to the IP address of the Service.
Or, if your cluster supports load balancing, you can create a Service of type LoadBalancer to expose the pod2 HTTP server on a public IP.
You can also create an Ingress that routes traffic to pod2 based on the hostname (a sketch is shown below).
For more information, please check the official documentation.
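As a rough sketch of that Ingress approach (assuming a recent cluster with an ingress controller installed, that pod2 is fronted by a Service named pod2-service on port 80, and that a TLS secret pod2-tls exists for the hostname; all of these names are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pod2-ingress
spec:
  tls:
  - hosts:
    - pod2.mydomain.com
    secretName: pod2-tls          # hypothetical TLS secret holding the cert for the hostname
  rules:
  - host: pod2.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pod2-service    # hypothetical Service in front of pod2
            port:
              number: 80
DNS for pod2.mydomain.com would then point at the ingress controller's external IP, and the certificate in pod2-tls must be valid for that hostname.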
You can hit the pod directly by its IP, as you are thinking, but it is better to use a Service in front of the pod.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
The Service routes traffic to pods with matching labels, so from pod1 you send the request to service-1, which forwards the traffic to pod2 and returns the response.
https://service-1.<namespace-name>.svc.cluster.local
With a Service, HTTPS will also work the way you asked. Test it with curl; start a curl pod:
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
Then send a curl request to the service:
curl https://service-1.<namespace-name>.svc.cluster.local
Service reference doc: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
Extra:
You can also use an Ingress and a service mesh, which makes your scenario a little simpler if you don't want to manage SSL/TLS certificates for the app yourself.
A service mesh supports mTLS authentication; you can enforce a policy easily, although there is extra management overhead.
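For example, with Istio (one possible service mesh, used here purely as an illustration), mTLS can be enforced for a namespace with a PeerAuthentication policy. A minimal sketch, assuming Istio is installed and the pods have sidecars injected:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: <namespace-name>   # the namespace holding pod1 and pod2
spec:
  mtls:
    mode: STRICT                # only mTLS traffic is accepted between sidecars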

Kubernetes API Gateway for Microservice deployment

I am trying to understand the Kubernetes API Gateway for my microservices. I have multiple microservices, and they are deployed as Kubernetes Deployments, each with its own Service.
I also have a front-end application that basically tries to communicate with the above APIs to complete the requests.
Overall, below is what I would like to achieve, and I would like your opinions.
Is my understanding correct in the diagram below? (i.e., should we have an API Gateway on top of all my microservices, and should the web application use this API Gateway to reach any of those services?)
If yes, how can I make that possible? I tried the Istio Gateway and it is somehow not working.
Here are the Istio Gateway and VirtualService:
On the other side, below is my service (catalog service) configuration:
apiVersion: v1
kind: Service
metadata:
  name: catalog-api-service
  namespace: local-shoppingcart-v1
  labels:
    version: "1.0.0"
spec:
  type: NodePort
  selector:
    app: catalog-api
  ports:
  - nodePort: 30001
    port: 30001
    targetPort: http
    protocol: TCP
    name: http-catalogapi
Also, in the hosts file (Windows: drivers\etc\hosts) I have entries for the local DNS:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 localshoppingcart.com
On the Istio service side, see the following screenshot.
I am not sure what is going wrong, but when I try localhost:30139/catalog or localhost/catalog, it always gives me a connection refused or connection not found error.
If you are on minikube, you have to get the minikube IP and port using these commands, as mentioned in the document.
Get the IP of minikube:
export INGRESS_HOST=$(minikube ip)
Document
You can get the HTTP and HTTPS port details:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
If you are on Docker Desktop, try forwarding traffic using kubectl:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Open localhost:8080 in the browser.
Read more
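Since the Gateway and VirtualService are only shown as screenshots in the question, here is a rough sketch of what they could look like for the catalog service. The resource names, the /catalog prefix, and the API version are assumptions; the namespace, hostname, Service name, and port come from the question:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: shoppingcart-gateway          # hypothetical name
  namespace: local-shoppingcart-v1
spec:
  selector:
    istio: ingressgateway             # Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "localshoppingcart.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: catalog-api                   # hypothetical name
  namespace: local-shoppingcart-v1
spec:
  hosts:
  - "localshoppingcart.com"
  gateways:
  - shoppingcart-gateway
  http:
  - match:
    - uri:
        prefix: /catalog              # assumed path prefix
    route:
    - destination:
        host: catalog-api-service
        port:
          number: 30001               # the Service port from the manifest above
With the port-forward above and the hosts-file entry for localshoppingcart.com, a request like curl http://localshoppingcart.com:8080/catalog should then be routed through the ingress gateway to the catalog Service.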

Traefik Let's Encrypt simplest example on GKE

I am trying to work on the simplest possible example of implementing Let's Encrypt with Traefik on GKE using this article. I have made some changes to suit my requirements, but I am unable to get the ACME certificate.
What I have done so far
Run the following command to create all the resource objects except the ingress-route:
$ kubectl apply -f 00-resource-crd-definition.yml,05-traefik-rbac.yml,10-service-account.yaml,15-traefik-deployment.yaml,20-traefik-service.yaml,25-whoami-deployment.yaml,30-whoami-service.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
serviceaccount/traefik-ingress-controller created
deployment.apps/traefik created
service/traefik created
deployment.apps/whoami created
service/whoami created
Get the IP of the Traefik Service exposed as Load Balancer
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.109.0.1 <none> 443/TCP 6h16m
traefik LoadBalancer 10.109.15.230 34.69.16.102 80:32318/TCP,443:32634/TCP,8080:32741/TCP 70s
whoami ClusterIP 10.109.14.91 <none> 80/TCP 70s
Create a DNS record for this IP
$ nslookup k8sacmetest.gotdns.ch
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: k8sacmetest.gotdns.ch
Address: 34.69.16.102
Create the resource ingress-route
$ kubectl apply -f 35-ingress-route.yaml
ingressroute.traefik.containo.us/simpleingressroute created
ingressroute.traefik.containo.us/ingressroutetls created
Logs of traefik
time="2020-04-25T20:10:31Z" level=info msg="Configuration loaded from flags."
time="2020-04-25T20:10:32Z" level=error msg="subset not found for default/whoami" providerName=kubernetescrd ingress=simpleingressroute namespace=default
time="2020-04-25T20:10:32Z" level=error msg="subset not found for default/whoami" providerName=kubernetescrd ingress=ingressroutetls namespace=default
time="2020-04-25T20:10:52Z" level=error msg="Unable to obtain ACME certificate for domains \"k8sacmetest.gotdns.ch\": unable to generate a certificate for the domains [k8sacmetest.gotdns.ch]: acme: Error -> One or more domains had a problem:\n[k8sacmetest.gotdns.ch] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Timeout during connect (likely firewall problem), url: \n" routerName=default-ingressroutetls-08dd2bb9eecaa72a6606#kubernetescrd rule="Host(`k8sacmetest.gotdns.ch`) && PathPrefix(`/tls`)" providerName=default.acme
What I have achieved
Traefik Dashboard
link
Whoami with notls
link
Not able to get the ACME certificate for the TLS whoami route.
my-pain
INFRA Details
I am using a Google Kubernetes Engine cluster (the one being talked about at cloud.google.com/kubernetes-engine; click on Go to Console).
Traefik version is 2.2.
And I am using Cloud Shell to access the cluster.
ASK:
1) Where am I going wrong in getting the TLS certificate?
2) If it is a firewall issue, how do I resolve it?
3) If you have any other, better simple example of Traefik Let's Encrypt on GKE, please let me know.
Just run sudo before the kubectl port-forward command. You are trying to bind to privileged ports, so you need more permissions.
It is not the simplest example for GKE, though, because you could use a GKE LoadBalancer instead of kubectl port-forward.
Try with this:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: web
    - protocol: TCP
      name: websecure
      port: 443
      targetPort: websecure
  selector:
    app: traefik
  type: LoadBalancer
Then you can find your new IP with kubectl get svc in the EXTERNAL-IP column, add a proper DNS record for your domain, and you should be fine.
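For the TLS route itself, the article's 35-ingress-route.yaml roughly corresponds to an IngressRoute like the following sketch; the resolver name default and the websecure entry point are assumptions based on the providerName=default.acme line in the logs, and they must match the flags on your Traefik deployment:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`k8sacmetest.gotdns.ch`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: whoami
      port: 80
  tls:
    certResolver: default    # must match the --certificatesresolvers.<name> flag on Traefik
Note that the ACME timeout in the logs means Let's Encrypt could not reach the domain, so ports 80/443 need to be reachable from the internet (via the LoadBalancer above) for the challenge to succeed.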

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via Apache on port 80. I'm unable to configure a Service + Ingress for access from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep track of the Service's cluster IP and the pod's IP.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the iptables routes, which are managed by kube-proxy, and that would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the load balancer health checks, which Google needs to investigate.
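If it does come down to the load balancer health checks, one thing worth verifying (a suggestion, not part of the original answer) is that a firewall rule allows Google's health-check source ranges 130.211.0.0/22 and 35.191.0.0/16 to reach the health-check NodePort; the rule name and network below are placeholders:
# List existing rules and look for one covering the health-check ranges
gcloud compute firewall-rules list
# Example of creating such a rule (name, network, and port are placeholders;
# 32463 is the health-check port listed in the question)
gcloud compute firewall-rules create allow-gke-health-checks \
  --network=default \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --allow=tcp:32463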
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
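So a common fix is to add a readinessProbe pointing at a path that actually returns 200. A minimal sketch for the webapp container above, where the /healthz path is a placeholder for whatever healthy endpoint the app exposes:
      containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
        - containerPort: 80
          name: http-server
        readinessProbe:
          httpGet:
            path: /healthz        # placeholder; must return HTTP 200
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
The GCP health check created for the Ingress then uses this path instead of /.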

Kubernetes service is reachable from node but not from my machine

I have a timeout problem with my site hosted on Kubernetes cluster provided by DigitalOcean.
u#macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u#pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from node:
u#node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of the "Does the Service work by IP?" part of the page, I tried the following command and got the expected output.
u#node$ curl 10.245.16.96
*correct response*
This should mean that everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u#node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But I have something wrong with iptables rules:
u#node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from node, see the screenshot below)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that with a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have a NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean Kubernetes persists its own set of rules in that firewall.
ClusterIP services are only accessible from within the cluster. If you want to access it from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl run --image=rancher/hello-world hello-world --replicas 2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080