How to set up an Ingress for a Kubernetes application on GKE

I have set up an application with a StatefulSet:
# Simple deployment used to deploy and manage the app in nigelpoulton/getting-started-k8s:1.0
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: coredeploy
  labels:
    app: core123
spec:
  replicas: 1
  # minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      # maxSurge: 1
  selector:
    matchLabels:
      app: core123
  serviceName: core123
  template:
    metadata:
      labels:
        app: core123
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: hello
        image: docker-registry.myregistry.com:5000/core_centos:LMS-130022
        imagePullPolicy: Always
        ports:
        - containerPort: 8008
        readinessProbe:
          tcpSocket:
            port: 8008
          periodSeconds: 1
This is my Service:
apiVersion: v1
kind: Service
metadata:
  name: service-core
spec:
  selector:
    app: core123
  type: NodePort
  ports:
  - name: nodeportcore
    protocol: TCP
    port: 9988
    targetPort: 8008
This is my Ingress:
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "testIngress"
spec:
rules:
- http:
paths:
- path: "/"
backend:
service:
name: "service-core"
port:
number: 9988
pathType: "ImplementationSpecific"
After I apply the Ingress manifest, my application runs but login does not work: the logs show a successful login, yet the app returns to the login screen. While checking, I noticed the URLs my application uses when I run it on localhost on-premises (not in a container; the URLs inside the container are the same):
http://localhost:8008/#/public/login
http://localhost:8008/#/user/settings
http://localhost:8008/#/user/dashboard/overview
http://localhost:8008/#/user/history/processing
http://localhost:8008/#/user/policy/template
Each URL starts with # followed by a route name such as /public/login, /user/settings, or /user/dashboard/overview.
=> My question: how do I set up the Ingress correctly so it works with my application?
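Two things stand out. First, because the app uses hash-fragment routing, everything after the # never leaves the browser; the server only ever sees /, so the single rule for / already covers all of those routes and the Ingress paths are not what breaks the login. Second, a login that succeeds in the logs but bounces back to the login screen is often a session problem: if the app keeps sessions in memory, a follow-up request routed to a different backend will not find them. If that turns out to be the cause here, one option on GKE is cookie-based session affinity via a BackendConfig; a minimal sketch (the name core-backendconfig is illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: core-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"  # the load balancer issues a cookie and pins each client to one backend

You would attach it to the Service with the annotation cloud.google.com/backend-config: '{"default": "core-backendconfig"}'.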

Related

WebSocket connection between services in Kubernetes

I have two services in Kubernetes (k3s): one is a backend and the other a frontend, and I need to transfer data from the backend to the frontend through a WebSocket. When I debug in docker-compose it works. So when running in docker-compose, on the backend I have (Python):
import websockets
...
start_server = websockets.serve(server, "0.0.0.0", 8001)
...
On the front end I have (HTML):
...
var ws = new WebSocket('ws://localhost:8001');
...
And I have an onclose function:
ws.onclose = function (e) {
    document.getElementById("textArea").innerHTML = 'Socket is closed. Reconnect will be attempted in 5 seconds...' + e.reason;
    setTimeout(function () {
        connect();
    }, 5000);
};
And it works as it should.
When I use k3s, I get: Socket is closed. Reconnect will be attempted in 5 seconds...
Here is my backend (app) Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: my-namespace
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      name: app
    spec:
      containers:
      - args:
        - python3.9
        - main.py
        image: app
        imagePullPolicy: Never
        name: app
        ports:
        - name: app
          containerPort: 8001
And the Service manifest for the backend:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: my-namespace
  labels:
    app: app-service
spec:
  type: ClusterIP
  ports:
  - port: 8001
    targetPort: app
  selector:
    app: app
My frontend Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: my-namespace
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      name: frontend
    spec:
      containers:
      - args:
        - python3.9
        - -m
        - uvicorn
        - main:app
        - --reload
        - --port
        - "8000"
        - --host
        - 0.0.0.0
        image: frontend
        imagePullPolicy: Never
        name: frontend
        ports:
        - name: frontend
          containerPort: 8000
Service for the frontend:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: my-namespace
  labels:
    app: frontend
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8000
    targetPort: frontend
  selector:
    app: frontend
I have an Ingress (Traefik ingress controller) that routes traffic into the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: my-dns.ca
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 8000
I changed the code in the HTML this way:
var ws = new WebSocket('ws://app-service.my-namespace.svc.cluster.local:8001');
but I still get Socket is closed. Reconnect will be attempted in 5 seconds...
How can I get a connection between the services, or debug the connection?
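One detail worth checking first: the WebSocket is opened by the browser, which runs outside the cluster, so neither ws://localhost:8001 nor ws://app-service.my-namespace.svc.cluster.local:8001 is reachable from it; cluster DNS names only resolve inside pods. A hedged sketch of one fix is to publish the backend through the same Traefik Ingress on its own path and point the page at the public host (the /ws path and the rule name are illustrative; Traefik forwards WebSocket upgrade requests without extra configuration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-ingress-ws      # illustrative name for the extra rule
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: my-dns.ca
    http:
      paths:
      - path: /ws               # illustrative path; the backend must accept connections on it
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 8001

The page would then use var ws = new WebSocket('ws://my-dns.ca/ws');. The cluster-local name remains correct for pod-to-pod calls, which you can confirm from inside the frontend pod with kubectl exec.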

GCE ingress resource is taking too long to receive an IP address in a GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the Ingress resource, which takes around 15-20 minutes to receive an IP address. By that time the application times out and ends up in an errored state. The usual time to assign the IP address is 2-3 minutes. Can anyone help with how to debug this issue?
This happens only in one specific cluster; the same Ingress gets its IP within 2 minutes in other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000
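kubectl describe ingress awesome-ingress usually shows in its events what the controller is still waiting for, which is the quickest way to see why one cluster lags behind the others. One cluster-specific difference worth ruling out with the gce-internal class: the internal HTTP(S) load balancer needs a proxy-only subnet in the cluster's region and container-native load balancing, i.e. the backing Services exposed through network endpoint groups. A hedged sketch of the NEG annotation, for clusters where it is not applied by default:

apiVersion: v1
kind: Service
metadata:
  name: hello-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # expose the Service via NEGs, required for gce-internal backends
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080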

Accessing an application inside a Kubernetes pod from another application in a different pod

I have a Kubernetes cluster with two deployments, ui-service-app and user-service-app. Both deployments are exposed through ClusterIP services, namely ui-service-svc and user-service-svc. In addition, there is an Ingress for accessing both applications from outside the cluster.
Now I want to make an API call from my application inside ui-service-app to user-service-app. Currently I am using ingress-ip/user to do so, but there should be some way to do this internally?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
      - name: user-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /ping
            port: 3000
        readinessProbe:
          httpGet:
            path: /ping
            port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
      - name: ui-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: ui-service-svc
            port:
              number: 80
      - path: /user(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
UPDATE 1:
This is the error page I get when I change the URL in the React app to http://user-service-svc.
Use the service name of the associated service.
From any other pod in the same namespace, the hostname user-service-svc will map to the Service you've defined, so http://user-service-svc would connect you to the web server of the user-service-app Deployment (no port specified, because your Service is mapping port 80 to container port 3000).
From another namespace, you can use the hostname <service>.<namespace>.svc.cluster.local, but that's not relevant to what you're doing here.
See the Service documentation for more details.
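If you want to keep the address out of the source, one common pattern (a sketch; the variable name USER_SERVICE_URL is hypothetical) is to inject the in-cluster URL through the Deployment's environment:

# Excerpt of the ui-service-app Deployment; only the env block is new.
spec:
  template:
    spec:
      containers:
      - name: ui-service-app
        image: <MY-IMAGE-URL>
        env:
        - name: USER_SERVICE_URL          # hypothetical variable the app reads at startup
          value: "http://user-service-svc"

One caveat: this only helps for calls made server-side. Code that executes in the visitor's browser, as in a typical React app, cannot resolve cluster DNS names at all and must keep calling through the Ingress, which matches the error page in UPDATE 1.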

ClusterIP service not accessible from within the cluster pods

I have got two deployments in my cluster, UI and USER. Both of these are exposed by ClusterIP services. There is an Ingress which makes both services publicly accessible.
Now when I do "kubectl exec -it UI-POD -- /bin/sh" and then try to "ping USER-SERVICE-CLUSTER-IP:PORT", it doesn't work.
All I get is no packets returned, i.e. a failure message.
Attaching my .yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
      - name: user-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /ping
            port: 3000
        readinessProbe:
          httpGet:
            path: /ping
            port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
      - name: ui-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: ui-service-svc
            port:
              number: 80
      - path: /user(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
Ping operates by means of Internet Control Message Protocol (ICMP) packets, which is not what your service is serving: a ClusterIP is a virtual IP that only forwards the TCP/UDP ports declared on the Service, so ping will fail even when the Service is healthy. You can instead try curl USER-SERVICE-CLUSTER-IP/ping or curl http://user-service-svc/ping from within your UI pod.
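If the UI image has no curl, a throwaway pod can run the same check; a minimal sketch using busybox (its wget stands in for curl):

apiVersion: v1
kind: Pod
metadata:
  name: tmp-debug            # illustrative name; delete the pod when done
spec:
  restartPolicy: Never
  containers:
  - name: tmp-debug
    image: busybox
    # busybox has no ENTRYPOINT, so args run directly; /ping should return 200 per the probes above
    args: ["wget", "-qO-", "http://user-service-svc/ping"]

kubectl logs tmp-debug then shows the /ping response if the Service is reachable.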

Google Cloud Platform (GCP) Ingress unhealthy backend

I have the following deployment running in Google Cloud Platform (GCP):
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backendweb
      tier: web
  template:
    metadata:
      labels:
        app: backendweb
        tier: web
    spec:
      containers:
      - name: mybackend
        image: eu.gcr.io/teststuff/backend:latest
        ports:
        - containerPort: 8081
This uses the following service:
apiVersion: v1
kind: Service
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  type: NodePort
  selector:
    app: backendweb
    tier: web
  ports:
  - port: 8081
    targetPort: 8081
Which uses this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: backend-ip
  labels:
    app: backendweb
spec:
  backend:
    serviceName: mybackend
    servicePort: 8081
When I spin all this up in Google Cloud Platform however, I get the following error message on my ingress:
All backend services are in UNHEALTHY state
I've looked through my pod logs with no indication of the problem. Grateful for any advice!
Most likely this problem is caused by your pod not returning 200 on the route '/'; the GCE ingress controller derives the load balancer's health check from the pod's readiness probe, and defaults to '/' when there is none. Please check your pod configuration. If you don't want to return 200 at '/', you could add a readiness probe for the health check like this:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8081   # must match the container's serving port; probing 80 would itself fail here
  initialDelaySeconds: 5
  periodSeconds: 5
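On current GKE versions the load-balancer health check can also be set explicitly with a BackendConfig attached to the Service, instead of being inferred from the readiness probe; a hedged sketch (the name mybackend-hc is illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: mybackend-hc
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz   # a path the container answers with 200
    port: 8081              # the container's serving port

It would be attached via the Service annotation cloud.google.com/backend-config: '{"default": "mybackend-hc"}'.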