Cannot access Kubernetes service outside cluster - kubernetes

I have created a Kubernetes Service for my Deployment, and a load balancer has assigned an external IP along with a node port, but I am unable to access the service from outside the cluster using either the external IP or the NodePort.
The service has been created correctly and is up and running.
Below is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-portal
  template:
    metadata:
      labels:
        app: dev-portal
    spec:
      containers:
      - name: dev-portal
        image: bhavesh/ti-portal:develop
        imagePullPolicy: Always
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "1G"
            cpu: "1"
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  selector:
    app: dev-portal
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
    nodePort: 30429
  type: LoadBalancer
For some reason I am unable to access the service from outside the cluster; the browser only shows 'Refused to connect'.
Update
The service is described using kubectl describe below:
Name: trakinvest-dev-portal
Namespace: default
Labels: app=trakinvest-dev-portal
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"trakinvest-dev-portal"},"name":"trakinvest-dev-portal","...
Selector: app=trakinvest-dev-portal
Type: LoadBalancer
IP: 10.245.185.62
LoadBalancer Ingress: 139.59.54.108
Port: <unset> 9000/TCP
TargetPort: 9000/TCP
NodePort: <unset> 30429/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
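Since the describe output shows Endpoints: <none>, the Service selector is apparently not matching any running pods. A minimal way to narrow that down (just a sketch, assuming kubectl access to the same cluster and namespace) is to compare the selector with the pod labels and watch whether the endpoints list fills in:

# Show the selector the Service is actually using
kubectl get svc trakinvest-dev-portal -o jsonpath='{.spec.selector}'

# List pods with their labels to see whether anything matches that selector
kubectl get pods --show-labels

# Endpoints should list pod IPs once the selector matches ready pods
kubectl get endpoints trakinvest-dev-portal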

Related

Openshift - accessing non http port

I have a fairly simple (Spring Boot) app that listens on the following ports:
8080 - HTTP (Swagger page)
1141 - non-HTTP traffic. It is a FIX (https://en.wikipedia.org/wiki/Financial_Information_eXchange) port, i.e. a direct socket-to-socket TCP/IP connection. The FIX engine used is QuickfixJ.
I'm trying to deploy this app on an OpenShift cluster. Here are the YAMLs I have:
Deployment config:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: pricing-sim-depl
  name: pricing-sim-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    app: pricing-sim-depl
  strategy:
    resources:
      limits:
        cpu: 200m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 512Mi
    type: Recreate
  template:
    metadata:
      labels:
        app: pricing-sim-depl
    spec:
      containers:
        - image: >-
            my-docker-registry/alex/pricing-sim:latest
          name: pricing-sim-pod
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 1141
              protocol: TCP
          tty: true
          resources:
            limits:
              cpu: 200m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 512Mi
Then I created a ClusterIP service for accessing the HTTP Swagger page:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-sv
  name: pricing-sim-service
  namespace: my-namespace
spec:
  ports:
    - name: swagger-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: pricing-sim-depl
  type: ClusterIP
and the Route for accessing it:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: pricing-sim-tn-swagger
  name: pricing-sim-tunnel-swagger
  namespace: my-namespace
spec:
  host: pricing-sim-swagger-my-namespace.apps.cpaas.service.test
  port:
    targetPort: swagger-port
  to:
    kind: Service
    name: pricing-sim-service
    weight: 100
  wildcardPolicy: None
The last component is a NodePort service to access the FIX port:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-esp-service
  name: pricing-sim-esp-service
  namespace: my-namespace
spec:
  type: NodePort
  ports:
    - port: 1141
      protocol: TCP
      targetPort: 1141
      nodePort: 30005
  selector:
    app: pricing-sim-depl
So far, the ClusterIP service and the Route work fine. I can access the Swagger page at
http://fxc-fix-engine-swagger-my-namespace.apps.cpaas.service.test
However, I'm not sure how I can access the FIX port (exposed by the NodePort service above). I can't use a Route for it, since it is not an HTTP endpoint (that's why I defined it as a NodePort service).
Looking at the OpenShift console, I can see the following for 'pricing-sim-esp-service':
Selectors: app=pricing-sim-depl
Type: NodePort
IP: 172.30.11.238
Hostname: pricing-sim-esp-service.my-namespace.svc.cluster.local
Session affinity: None
Traffic (one row):
  Route/Node Port: 30005
  Service Port: 1141/TCP
  Target Port: 1141
  Hostname: none
  TLS Termination: none
By the way, I'm following the suggestion from this Stack Overflow post: OpenShift :: How do we enable traffic into pod on a custom port (non-web / non-http)
I've also tried the LoadBalancer service type, which does show an external IP on the service page above, but that external IP doesn't seem to be reachable from my local PC either.
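For what it's worth, a NodePort service normally accepts traffic on every node's IP at the allocated port, so a raw TCP check against port 30005 is one way to see whether the service itself is reachable. This is only a sketch, assuming the node IPs are routable from your machine and no firewall blocks the port; <node-ip> is a placeholder:

# List node addresses (INTERNAL-IP / EXTERNAL-IP columns)
oc get nodes -o wide

# Probe the FIX port on one of the nodes
nc -vz <node-ip> 30005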
The version of openshift we are running is:
OpenShift Master: v3.11.374
Kubernetes Master: v1.11.0+d4cacc0
OpenShift Web Console: 3.11.374-1-3365aaf
Thank you in advance!

Kubernetes is always forwarding the request to same pod

I have a Kubernetes cluster with 1 control plane and 1 worker; the worker runs 3 pods. The pods and the Service of type NodePort are on the same node. I was expecting the Service to load balance requests between the pods, but it looks like all requests are always forwarded to the same pod.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30002
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-app
        image: webimage
        ports:
        - containerPort: 80
        imagePullPolicy: Never
        resources:
          limits:
            cpu: "0.5"
          requests:
            cpu: "0.5"
This is expected behavior if your requests use a persistent (keep-alive) TCP connection, because the connection stays pinned to a single pod. Try adding a "Connection: close" header to your HTTP requests.
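A quick way to test that (just a sketch, assuming curl as the client and <node-ip> as a placeholder for one of your node addresses) is to send each request with an explicit Connection: close header, so every request opens a fresh TCP connection and kube-proxy can pick a different backend each time:

# Each request opens a new connection instead of reusing a keep-alive one
for i in $(seq 1 10); do
  curl -s -H "Connection: close" http://<node-ip>:30002/
done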

Kubernetes Load Balancer Type not responding to External IP Address

I've been trying to use the manifests below to expose my application on a public IP. This is being done on Azure. The public IP is generated, but when I browse to it I get nothing.
This is a Django app whose container listens on port 8000. The service currently runs on port 80, but even when I configure the service to run on port 8000 it still doesn't work.
Is there something wrong with the way my service is defined?
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
  selector:
    app: hmweb
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nw_webimage
        envFrom:
        - configMapRef:
            name: new-config
        command: ["/bin/sh","-c"]
        args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: key
Output of kubectl describe service web (the name of the service):
Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
The reason is that your Service has two selector labels, app: hmweb and tier: frontend, while your Deployment's pods carry only the single label app: hmweb. When the Service is created it therefore finds no pods with both labels and does not connect to any pods. Also, if your container is listening on port 8000, you must set targetPort to that container port; otherwise targetPort defaults to the same value as port, i.e. port: 80.
The corrected YAML for your Service is:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
  selector:
    app: hmweb
  type: LoadBalancer
Hope this helps.
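After applying the corrected Service, one way to confirm it (a minimal sketch, assuming kubectl points at the same cluster and the default namespace) is to check that the endpoints list is populated and that the load balancer answers on port 80:

# Should now list the pod IP on port 8000
kubectl get endpoints web

# The LoadBalancer ingress IP from the describe output should respond on port 80
curl -I http://13.69.127.16/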

unable to access the application deployed on kubernetes cluster using kubernetes playground

I have a 3-node cluster created on the Kubernetes playground.
The 3 nodes, as seen on the UI, are:
192.168.0.13 : Master
192.168.0.12 : worker
192.168.0.11 : worker
I have a front-end app connected to a backend MySQL database.
The deployment and service definitions for the front end are below.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort
  ports:
  - port: 8080
  selector:
    app: springboot-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
      - image: chinmayeepdas/springbootapp:1.0
        name: springboot-app
        env:
        - name: DATABASE_HOST
          value: demo-mysql
        - name: DATABASE_NAME
          value: chinmayee
        - name: DATABASE_USER
          value: root
        - name: DATABASE_PASSWORD
          value: root
        - name: DATABASE_PORT
          value: "3306"
        ports:
        - containerPort: 8080
          name: app-port
My pods for the UI and the backend are up and running.
[node1 ~]$ kubectl describe service springboot-app
Name: springboot-app
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=springboot-app
Type: NodePort
IP: 10.96.187.226
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30373/TCP
Endpoints: 10.32.0.2:8080,10.32.0.3:8080,10.40.0.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Now when I open
http://192.168.0.12:30373/employee/getAll
I don't see any result; I get "This site can't be reached".
What IP address do I have to give in the URL?
Try this solution:
kubectl proxy --address 0.0.0.0
Then access it as http://localhost:30373/employee/getAll
or maybe:
http://localhost:8080/employee/getAll
Let me know if this fixes the access issue and which one works.
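For what it's worth, kubectl proxy usually exposes services through the API server's proxy path rather than the NodePort. A hedged sketch of that pattern, assuming the default namespace and the default proxy port 8001:

# kubectl proxy serves on port 8001 by default
kubectl proxy --address 0.0.0.0 &

# Services are reachable through the API server proxy path
curl http://localhost:8001/api/v1/namespaces/default/services/springboot-app:8080/proxy/employee/getAll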

How do I run traefik behind Kubernetes on Google Container Engine?

So I have Traefik "running" on Kubernetes:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Service","apiVersion":"v1","metadata":{"name":"traefik","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"traefik-ingress-lb"}},"spec":{"ports":[{"name":"http","port":80,"targetPort":80},{"name":"https","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"},"status":{"loadBalancer":{}}}'
  creationTimestamp: 2016-11-30T23:15:49Z
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik
  namespace: kube-system
  resourceVersion: "9672"
  selfLink: /api/v1/namespaces/kube-system/services/traefik
  uid: ee07b957-b752-11e6-88fa-42010af00083
spec:
  clusterIP: 10.11.251.200
  ports:
  - name: http
    nodePort: 30297
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30247
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: # IP THAT IS ALLOCATED BY k8s BUT NOT ASSIGNED
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: '###'
  creationTimestamp: 2016-11-30T22:59:07Z
  generation: 2
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik-ingress-controller
  namespace: kube-system
  resourceVersion: "23438"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/traefik-ingress-controller
  uid: 9919ff46-b750-11e6-88fa-42010af00083
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
        version: v1.1
    spec:
      containers:
      - args:
        - --web
        - --kubernetes
        - --configFile=/etc/config/traefik.toml
        - --logLevel=DEBUG
        image: gcr.io/myproject/traefik
        imagePullPolicy: Always
        name: traefik-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
        - mountPath: /etc/traefik
          name: traefik-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 60
      volumes:
      - configMap:
          defaultMode: 420
          name: traefik-config
        name: config-volume
      - emptyDir: {}
        name: traefik-volume
status:
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
My problem is that the external IP assigned by Kubernetes does not actually forward to Traefik; in fact, the IP assigned does not even show up in my Google Cloud Platform console. How do I get Traefik working with a load-balancer on Google Container Engine?
If you do
kubectl describe svc/traefik
do the endpoints match the pod IPs from:
kubectl get po -l k8s-app=traefik-ingress-lb -o wide
What happens when you hit the LB IP? Does it load indefinitely, or do you get a different service?
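Both objects in the question live in kube-system, so the namespace flag is probably needed when comparing them. A minimal sketch of that check, assuming kubectl access to the cluster:

# Endpoint IPs currently attached to the Service
kubectl -n kube-system get endpoints traefik

# Pod IPs that those endpoints should match
kubectl -n kube-system get po -l k8s-app=traefik-ingress-lb -o wide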
There was a new fork of Traefik that supposedly fixes some Kubernetes issues. However, I did further reading and talking about Traefik and uncovered some recent advisories of potential instability. While this is to be expected in new software, I decided to switch to NGINX to handle my reverse proxy. It's been working wonderfully, so I'm going to go ahead and close this question.