Kubernetes dashboard in Docker Desktop needs proxy - kubernetes

I am trying to start the Kubernetes dashboard in Docker Desktop and it works fine, but I always need to start kubectl proxy first, and if I close that PowerShell window the dashboard stops working.
Is there any way to start the dashboard without the proxy, or to keep the proxy running all the time? How can I access this dashboard over the network?

In order to persistently expose the dashboard, you have to add a service to your cluster.
Create a yaml file with the following content (let's call it dash-serv.yaml):
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-nodeport
  namespace: kube-system
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 32123
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
Then run kubectl apply -f dash-serv.yaml and test your dashboard access at http://localhost:32123.
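To confirm the service took effect, you can list it and hit the port directly (a quick sketch; the service name matches the yaml above):

# Should show a NodePort service mapping 32123 -> 9090
kubectl get svc -n kube-system kubernetes-dashboard-nodeport
curl http://localhost:32123/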

Related

Pods can't communicate in Kubernetes

I have two pods. One pod (main) has an endpoint that queries the other pod (other) to get a result. Both pods have services of type ClusterIP, and the main pod also has an ingress. The main pod is not able to connect to the other pod at the given endpoint.
The / endpoint works, but the /other endpoint fails.
Below are the config files:
# main-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: main-service
  labels:
    name: main-service-label
spec:
  selector:
    app: main # label selector of pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8001
    protocol: TCP
    targetPort: 8001

# other-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: other-service
  labels:
    name: other-service-label
spec:
  selector:
    app: other # label selector of pod, not the deployment
  type: ClusterIP
  ports:
  - port: 8002
    protocol: TCP
    targetPort: 8002
All the Docker images, deployment files, ingress, etc. are available at: this repo.
Note:
I entered the other pod using kubectl exec, and I am able to make a curl request to the main pod, but not vice versa. Not sure what is going wrong.
All pods and services are in the default namespace.
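A minimal way to narrow this down (a sketch; the pod name is a placeholder, and it assumes the image ships curl):

# From inside the main pod, test the other service's DNS name and port
kubectl exec -it <main-pod-name> -- curl -v http://other-service:8002/

If the name does not resolve or the connection is refused, compare the service selector with the pod labels.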

Azure AKS internal load balancer not responding to requests

I have an AKS cluster, as well as a separate VM. The AKS cluster and the VM are in the same VNET (as well as the same subnet).
I deployed an echo server with the following yaml. I'm able to curl the pod directly from the VM using its VNET IP, but when trying that through the load balancer, nothing returns. Really not sure what I'm missing. Any help is appreciated.
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: ealen/echo-server
        ports:
        - name: http
          containerPort: 8080
I'm expecting that when I curl the load balancer's VNET IP, I receive the same response as when I curl the pod IP directly.
Can you check your internal load balancer's health probe?
"For Kubernetes 1.24+ the services of type LoadBalancer with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as health probe protocol (while before v1.24.0 it uses TCP). And / will be used as the default health probe request path. If your service doesn’t respond 200 for /, please ensure you're setting the service annotation service.beta.kubernetes.io/port_{port}_health-probe_request-path or service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path (applies to all ports) with the correct request path to avoid service breakage."
(ref: https://github.com/Azure/AKS/releases/tag/2022-09-11)
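For example, the probe path can be pinned on the Service itself with the annotation quoted above (a sketch based on the Service from the question; the path must be one your app answers 200 on):

apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # assumption: the echo server returns 200 on /
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server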
If you are using the nginx-ingress controller, try adding the same annotation as described in the doc
(https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--reuse-values \
--namespace <NAMESPACE> \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Have you checked whether the pod's IP is correctly mapped as an endpoint to the service? You can check it using:
kubectl describe svc echo-server -n test | grep Endpoints
If not, please check the labels and selectors against your actual deployment (rather than the resources in the description).
If it is correctly mapped, are you sure that the VM you are using (_#tester) is in the correct subnet, which should include the iLB IP 10.240.0.226 as well?
Found the solution. The only thing I needed to do was add the following to the Service declaration:
externalTrafficPolicy: 'Local'
Full yaml below:
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: 'Local'
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: echo-server
Previously it was set to 'Cluster'.
Just got off a call with Azure support; this seems to be a specific bug (it happens with newer versions of AKS). Posting the related link here: https://github.com/kubernetes/ingress-nginx/issues/8501
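For an already-deployed service, the same change can be applied in place (a sketch using kubectl patch):

# Switch the traffic policy without re-applying the full yaml
kubectl patch svc echo-server -p '{"spec":{"externalTrafficPolicy":"Local"}}'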

Unable to access Kubernetes pods

I followed the steps below to deploy my custom jar.
1) I created a Docker image with the following Dockerfile:
FROM openjdk:8-jre-alpine3.9
LABEL MAINTAINER DINESH
LABEL version="1.0"
LABEL description="First image with Dockerfile & DINESH."
RUN mkdir /app
COPY LoginService-0.0.1-SNAPSHOT.jar /app
WORKDIR /app
CMD ["java", "-jar", "LoginService-0.0.1-SNAPSHOT.jar"]
2) Deployed on Kubernetes with the following deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: localhost:5000/my-image:latest
        name: hello-world
        ports:
        - containerPort: 8000
3) Exposed it as a service with the following commands:
minikube tunnel
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --port=8000 --target-port=8000
4) Output of kubectl describe svc my-service:
Name:                     my-service
Namespace:                default
Labels:                   app.kubernetes.io/name=load-balancer-example
Annotations:              <none>
Selector:                 app.kubernetes.io/name=load-balancer-example
Type:                     LoadBalancer
IP:                       10.96.142.93
LoadBalancer Ingress:     10.96.142.93
Port:                     <unset>  8000/TCP
TargetPort:               8000/TCP
NodePort:                 <unset>  31284/TCP
Endpoints:                172.18.0.7:8000,172.18.0.8:8000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
The pod is in the Running state.
I am trying to access the pod using 10.96.142.93 at http://10.96.142.93:8090 (my login service is started on port 8090), but I am unable to access the pod. Please help.
Try to access it on the NodePort, localhost:31284, and use service type NodePort instead of LoadBalancer, because the LoadBalancer service type is mostly used at the cloud level.
Also, use the same target port that you configured in the pod definition yaml file,
so your URL should be http://10.96.142.93:8000.
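Note that the question says the jar listens on 8090, while the deployment and service use 8000. If 8090 is the real listen port, the target port has to match it; a hedged sketch:

# Re-create the service with a target port matching the app's listen port
kubectl delete svc my-service
kubectl expose deployment hello-world --type=NodePort --name=my-service --port=8000 --target-port=8090

(The containerPort in the deployment yaml should be updated to 8090 as well.)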
Another way is port-forwarding:
kubectl port-forward pod_name 80:8000 will map the pod port to a localhost port.
Then access it at http://localhost:80.
The Kubernetes LoadBalancer service type is provided only in environments such as AWS and GCP. Use MetalLB for an on-premises environment.
Otherwise, try modifying the service type to NodePort and accessing http://localhost:31284.
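Since this is minikube, you can also let minikube print a reachable URL for the service (the service name follows the question):

minikube service my-service --url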

How to access app once deployed via Kubernetes?

I have a very simple Python app that works fine when I execute uvicorn main:app --reload. When I go to http://127.0.0.1:8000 on my machine, I'm able to interact with the API. (My app has no frontend; it is just an API built with FastAPI.) However, I am trying to deploy it via Kubernetes and am not sure how I can access/interact with my API.
Here is my deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
When I enter kubectl describe deployments my-deployment in the terminal, I get back a printout of the deployment, the namespace it is in, the pod template, a list of events, etc., so I am pretty sure it is properly deployed.
How can I access the application? What would the URL be? I have tried a variety of localhost + port combinations, to no avail. I am new to Kubernetes, so I'm trying to understand how this works.
Update:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: site
        image: nginx:1.16.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
Again, when I use the k8s CLI I'm able to see my deployment, yet when I hit localhost:30001 I get an "Unable to connect" message.
You have given containerPort: 80, but if your app listens on port 8080, change it to 8080.
There are different ways to access an application deployed on Kubernetes:
Port-forward using kubectl port-forward deployment/my-deployment 8080:8080 (see the sketch after this list).
Create a NodePort service and use http://<NODEIP>:<NODEPORT>.
Create a LoadBalancer service. This works only in a supported cloud environment such as AWS, GKE, etc.
Use an ingress controller such as nginx to expose the application.
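A minimal sketch of the port-forward option, assuming the container actually runs the FastAPI app on port 8000 (the yaml above uses the stock nginx image on port 80, which would need to be swapped for the app's image):

# Forward a local port to the deployment's pods, then (in a second terminal) hit the API
kubectl port-forward deployment/app-deployment 8000:8000
curl http://127.0.0.1:8000/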
By default, Kubernetes applications are exposed only within the cluster. If you want to access one from outside the cluster, you can select any of the options below:
Expose the deployment as a NodePort service (kubectl expose deployment my-deployment --name=my-deployment-service --type=NodePort), describe the service and get the node port assigned to it (kubectl describe svc my-deployment-service), then try http://<node-IP>:<node-port>/
For a production-grade cluster the best practice is to use the LoadBalancer type (kubectl expose deployment my-deployment --name=my-deployment-service --type=LoadBalancer --target-port=8080); as part of this service you get an external IP which can be used to access your service at http://<EXTERNAL-IP>:8080/
You can also see the details about the endpoints using kubectl get ep
Thanks,

How can I access services outside the cluster using kubectl proxy?

When we spin up a cluster with kubeadm, and the service's .yaml file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: neo4j
  labels:
    app: neo4j
    component: core
spec:
  clusterIP: None
  ports:
  - port: 7474
    targetPort: 7474
    name: browser
  - port: 6362
    targetPort: 6362
    name: backup
  selector:
    app: neo4j
    component: core
After all pods and services are running, I run kubectl proxy and it says:
Starting to serve on 127.0.0.1:8001
So when I want to access this service, I try:
curl localhost:8001/api/
But the service is only reachable inside the cluster! How can I reach services from outside the cluster?
You should expose your service using NodePort:
apiVersion: v1
kind: Service
metadata:
  name: neo4j
  labels:
    app: neo4j
    component: core
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - port: 7474
    targetPort: 7474
    name: browser
  - port: 6362
    targetPort: 6362
    name: backup
  selector:
    app: neo4j
    component: core
Now if you describe your service using
kubectl describe svc neo4j
you will get a nodePort value between 30000 and 32767, and you can access your service from outside the cluster using
curl http://<node_ip>:<node_port>
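To find those two values (a sketch; the jsonpath picks the first port of the neo4j service):

kubectl get nodes -o wide    # the node IP is in the INTERNAL-IP column
kubectl get svc neo4j -o jsonpath='{.spec.ports[0].nodePort}'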
Hope this helps.
EDIT: Yes, you can't directly keep clusterIP: None when exposing the service through NodePort. clusterIP: None means there is no internal load balancing done by Kubernetes, and for that you can also set externalTrafficPolicy: Local in the service definition.
Alternatively, you might be able to use an ingress to route traffic to the correct Service.
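A minimal Ingress sketch for that alternative (it assumes an ingress controller such as nginx is installed; the hostname is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: neo4j-browser
spec:
  rules:
  - host: neo4j.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: neo4j
            port:
              number: 7474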