Unable to access an application deployed on a Kubernetes cluster using Kubernetes Playground

I have a 3-node cluster created on Kubernetes Playground.
The three nodes as seen in the UI are:
192.168.0.13 : Master
192.168.0.12 : worker
192.168.0.11 : worker
I have a front-end app connected to a backend MySQL database.
The Deployment and Service definitions for the front end are below.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: springboot-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - image: chinmayeepdas/springbootapp:1.0
          name: springboot-app
          env:
            - name: DATABASE_HOST
              value: demo-mysql
            - name: DATABASE_NAME
              value: chinmayee
            - name: DATABASE_USER
              value: root
            - name: DATABASE_PASSWORD
              value: root
            - name: DATABASE_PORT
              value: "3306"
          ports:
            - containerPort: 8080
              name: app-port
My pods for the UI and the backend are up and running.
[node1 ~]$ kubectl describe service springboot-app
Name: springboot-app
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=springboot-app
Type: NodePort
IP: 10.96.187.226
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30373/TCP
Endpoints: 10.32.0.2:8080,10.32.0.3:8080,10.40.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I open
http://192.168.0.12:30373/employee/getAll
I don't see any result; the browser says "This site can't be reached".
What IP address do I have to give in the URL?

Try this solution:
kubectl proxy --address 0.0.0.0
Then access it as http://localhost:30373/employee/getAll
or maybe:
http://localhost:8080/employee/getAll
Let me know if this fixes the access issue and which one works.
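If neither URL works, it may help to narrow down where the request fails. A minimal sanity check, using only the node IP, ClusterIP, and ports already shown in the question (run from a node terminal on the playground):
# Hit the NodePort on a worker node's IP:
curl http://192.168.0.12:30373/employee/getAll
# Hit the Service's ClusterIP and port, reachable from any node or pod:
curl http://10.96.187.226:8080/employee/getAll
If the ClusterIP answers but the NodePort does not, the problem is the playground's external networking rather than the Service definition.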

Related

Service Endpoint not created although container port is online

I have a simple Service that connects to a port from a container inside a pod.
All pretty straightforward.
This was working too, but out of nowhere the endpoint is no longer created for port 18080.
So I began to investigate and looked at a similar question, but nothing there helped.
The container is up, no errors/events, all green.
I can also make the request against the pod's IP on port 18080 from another container, so the endpoint should be reachable for the service.
I can't see errors in:
journalctl -u snap.microk8s.daemon-*
I am using microk8s v1.20.
Where else can I debug this situation?
I am out of tools.
Service:
kind: Service
apiVersion: v1
metadata:
  name: aedi-service
spec:
  selector:
    app: server
  ports:
    - name: aedi-host-ws #-port
      port: 51056
      protocol: TCP
      targetPort: host-ws-port
    - name: aedi-http
      port: 18080
      protocol: TCP
      targetPort: fcs-http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
        srv: os-port-mapping
        name: dns-service
    spec:
      hostname: fcs
      containers:
        - name: fcs
          image: '{{$fcsImage}}'
          imagePullPolicy: {{$pullPolicy}}
          ports:
            - containerPort: 18080
Service Description:
Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
Pod Info:
NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
kubectl get svc aedi-service -o wide:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
Your Service spec refers to a port named "fcs-http", but that name was never declared in the Deployment. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
...
      ports:
        - containerPort: 18080
          name: fcs-http # <-- add the name here
...
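After adding the name, the endpoints controller should pick the pod up. A quick check, using the Service name and namespace shown in the question:
# The aedi-http port should now list an endpoint instead of being empty:
kubectl describe svc aedi-service -n fcs-only
kubectl get endpoints aedi-service -n fcs-only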
Wrong service configuration:
- name: aedi-http
  port: 18080            # ----> this exposes the Service; it is not related to the container port
  protocol: TCP
  targetPort: fcs-http   # ----> this should be 18080, corresponding to the container port
If you still want to use a name instead of a port number, you should define the name in the Deployment YAML as well, like below:
containers:
  - name: fcs
    image: '{{$fcsImage}}'
    imagePullPolicy: {{$pullPolicy}}
    ports:
      - containerPort: 18080
        name: fcs-http
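Either fix works; the named-port approach has the advantage that the container port number can change later without touching the Service, since targetPort: fcs-http is resolved against the pod's port names when endpoints are created.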

IPs mentioned in an Endpoints object are not getting configured in k8s

I am trying to add IPs manually using an Endpoints object in YAML. However, the minikube cluster is getting its default endpoint IPs instead of the ones mentioned in the YAML file. Why?
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-container
          image: nginx:1.16
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-service
subsets:
  - ports:
      - port: 80
    addresses:
      - ip: 172.17.0.11 # ---> configured IP
      - ip: 172.17.0.12 # ---> configured IP
      - ip: 172.17.0.13 # ---> configured IP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      nodePort: 30464
      port: 90
      targetPort: 80
IPs in the endpoints output (note 172.17.0.6, 172.17.0.7, and 172.17.0.8, while I specified 172.17.0.11, 172.17.0.12, and 172.17.0.13 in the YAML):
/home/ravi/k8s>kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.49.2:8443 36h
nginx-service 172.17.0.6:80,172.17.0.7:80,172.17.0.8:80 5m59s
I have tried replicating your issue and got the configured IP addresses for the endpoints.
The difference might also be caused by the namespaces; check that the Endpoints object is created in the same namespace as the Service.
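A minimal way to check that, using the object names from the question:
# List Endpoints objects in every namespace and look for nginx-service:
kubectl get endpoints --all-namespaces | grep nginx-service
# Inspect the Endpoints object the Service is actually using:
kubectl get endpoints nginx-service -o yaml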

Service not reachable in Kubernetes

For my research project, I need to deploy Graylog in our Kubernetes infrastructure. Graylog uses MongoDB, which is deployed on the same cluster.
kubectl describe svc -n mongodb
Name: mongodb
Namespace: mongodb
Labels: app=mongodb
Annotations:
Selector: app=mongodb
Type: ClusterIP
IP: 10.109.195.209
Port: 27017 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.244.2.21:27017
Session Affinity: None
Events: <none>
I use the deployment script below to deploy Graylog:
apiVersion: v1
kind: Service
metadata:
  name: graylog3
spec:
  type: NodePort
  selector:
    app: graylog-deploy
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
      nodePort: 30003
    - name: "12201"
      port: 12201
      targetPort: 12201
      nodePort: 30004
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graylog-deploy
  labels:
    app: graylog-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: graylog-deploy
  template:
    metadata:
      labels:
        name: graylog-deploy
        app: graylog-deploy
    spec:
      containers:
        - name: graylog3
          image: graylog/graylog:3.0
          env:
            - name: GRAYLOG_PASSWORD_SECRET
              value: g0ABP9MJnWCjWtBX9JHFgjKAmD3wGXP3E0JQNOKlquDHnCn5689QAF8rRL66HacXLPA6fvwMY8BZoVVw0JqHnSAZorDDOdCk
            - name: GRAYLOG_ROOT_PASSWORD_SHA2
              value: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
            - name: GRAYLOG_HTTP_EXTERNAL_URI
              value: http://Master_IP:30003/
            - name: GRAYLOG_ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: GRAYLOG_MONGODB_URI
              value: mongodb://mongodb:27017/graylog
          ports:
            - containerPort: 9000
            - containerPort: 12201
Graylog is throwing an exception:
Caused by: java.net.UnknownHostException: mongodb
But when deploying it using the MongoDB IP, it runs successfully.
I am new to Kubernetes and don't know what I am doing wrong here.
Thanks.
Since your MongoDB is running in a different namespace called mongodb, you need to provide the FQDN of the service in that namespace. Your Graylog is in the default namespace.
So to access the mongodb service in the mongodb namespace, change your YAML as below:
- name: GRAYLOG_MONGODB_URI
  value: mongodb://mongodb.mongodb:27017/graylog
Here is a link that might provide more insight.
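For reference, mongodb.mongodb is the short <service>.<namespace> form; the fully qualified name is mongodb.mongodb.svc.cluster.local. A quick in-cluster DNS check (the pod name dns-test is arbitrary):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup mongodb.mongodb.svc.cluster.local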

Kubernetes deployment not publicly accessible

I'm trying to access a deployment on our Kubernetes cluster on Azure. This is an Azure Kubernetes Service (AKS) cluster. Here are the configuration files for the deployment and the service that should expose it.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
        - name: backend
          image: registry.gitlab.com/izit/mira-backend
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
      imagePullSecrets:
        - name: regcred

apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
After this deployment, I don't see any requests getting handled. I get an error message in my browser saying the site is inaccessible. Any ideas what I could have configured wrong?
Your service selector labels and pod labels do not match.
You have the app: mira-api label in the deployment's pod template but run: mira-api in the service's label selector.
Change your service selector label to match the pod label as follows:
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: mira-api
To verify whether your service is selecting the backend pods, run kubectl describe svc <svc name> and check whether it lists any Endpoints.
# kubectl describe svc postgres
Name: postgres
Namespace: default
Labels: app=postgres
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector: app=postgres
Type: ClusterIP
IP: 10.106.7.183
Port: default 5432/TCP
TargetPort: 5432/TCP
Endpoints: 10.244.2.117:5432 <------- This line
Session Affinity: None
Events: <none>

Kubernetes Load Balancer Type not responding to External IP Address

I've been trying to use the configuration below to expose my application on a public IP. This is being done on Azure. The public IP is generated, but when I browse to it I get nothing.
This is a Django app whose container runs on port 8000. The service runs on port 80 at the moment, but even if I configure the service to run on port 8000 it still doesn't work.
Is there something wrong with the way my service is defined?
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
    - port: 80
  selector:
    app: hmweb
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
        - name: hmweb
          image: nw_webimage
          envFrom:
            - configMapRef:
                name: new-config
          command: ["/bin/sh","-c"]
          args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: key
Output of kubectl describe service web (the name of the service):
Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
The reason is that your service has two selector labels, app: hmweb and tier: frontend, while your deployment's pods carry only the single label app: hmweb. When the service is created, it cannot find any pods that have both labels, so it does not connect to any pods. Also, since your container runs on port 8000, you must set targetPort to the port the container is running on; otherwise targetPort defaults to the same value as port, i.e. the port: 80 you defined in your service.
The corrected YAML for your service is:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  selector:
    app: hmweb
  type: LoadBalancer
Hope this helps.
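As a follow-up check, you can confirm the diagnosis directly by asking which pods each selector actually matches, using the labels from the question:
# No pods carry both labels, so this returns nothing:
kubectl get pods -l app=hmweb,tier=frontend
# The deployment's pods match the single label:
kubectl get pods -l app=hmweb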