WebSocket connection between services in Kubernetes

I have two services in Kubernetes (k3s): one is the backend and the other is the frontend, and I need to transfer data from the backend to the frontend through a WebSocket. When I debug in docker-compose it works. When running in docker-compose, on the backend I have (Python):
import websockets
...
start_server = websockets.serve(server, "0.0.0.0", 8001)
...
On the frontend I have (HTML/JavaScript):
...
var ws = new WebSocket('ws://localhost:8001');
...
And I have an onclose handler:
ws.onclose = function (e) {
    document.getElementById("textArea").innerHTML = 'Socket is closed. Reconnect will be attempted in 5 seconds...' + e.reason;
    setTimeout(function () {
        connect();
    }, 5000);
};
And it works as it should.
When I run it in k3s I get: Socket is closed. Reconnect will be attempted in 5 seconds...
Here is my backend (app) Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: my-namespace
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      name: app
    spec:
      containers:
      - args:
        - python3.9
        - main.py
        image: app
        imagePullPolicy: Never
        name: app
        ports:
        - name: app
          containerPort: 8001
And the Service manifest for the backend:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: my-namespace
  labels:
    app: app-service
spec:
  type: ClusterIP
  ports:
  - port: 8001
    targetPort: app
  selector:
    app: app
My frontend deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: my-namespace
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      name: frontend
    spec:
      containers:
      - args:
        - python3.9
        - -m
        - uvicorn
        - main:app
        - --reload
        - --port
        - "8000"
        - --host
        - 0.0.0.0
        image: frontend
        imagePullPolicy: Never
        name: frontend
        ports:
        - name: frontend
          containerPort: 8000
Service for frontend:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: my-namespace
  labels:
    app: frontend
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8000
    targetPort: frontend
  selector:
    app: frontend
I have an Ingress controller (Traefik) that routes traffic into the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: my-dns.ca
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 8000
I changed the code in the HTML this way:
var ws = new WebSocket('ws://app-service.my-namespace.svc.cluster.local:8001');
but get the same result: Socket is closed. Reconnect will be attempted in 5 seconds...
How can I get a connection between the services, or debug the connection?
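Note that the WebSocket URL in the HTML is evaluated by the visitor's browser, outside the cluster, so a cluster-internal DNS name such as app-service.my-namespace.svc.cluster.local will generally not resolve there; only pods inside the cluster can use it. A quick way to check that the Service itself works is to open a WebSocket from a throwaway pod inside the cluster. A minimal sketch, assuming the websockets client library (the pod name and test message are placeholders):

# Run inside the cluster, e.g.:
#   kubectl run ws-debug -n my-namespace --rm -it --image=python:3.9 -- bash
#   pip install websockets && python debug_ws.py
import asyncio
import websockets

async def probe():
    # Cluster-internal Service DNS name; resolvable only from inside the cluster.
    uri = "ws://app-service.my-namespace.svc.cluster.local:8001"
    async with websockets.connect(uri) as ws:
        await ws.send("ping")  # placeholder message
        print("reply:", await ws.recv())

asyncio.run(probe())

If this succeeds from inside the cluster but the browser still fails, the usual fix is to expose the WebSocket through the same Ingress (for example, an extra path rule pointing to app-service) and have the browser connect to ws://my-dns.ca/<that-path> instead of the cluster DNS name.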

Related

Azure AKS Application Gateway 502 Bad Gateway

I have been following the tutorial here:
MS Azure
That part is fine. However, deploying a local config file I get a "502 Bad Gateway" error. This config has previously been fine and worked as expected.
Can anyone see anything obvious with this? At this point I don't know where to start.
I am trying to use the ingress controller that is Application Gateway, and then add deployments and apply additional ingress rules:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 80
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 80
Output of kubectl describe ingress strata-2022:
Name:             strata-2022
Labels:           app=my-docker-apps
Namespace:        default
Address:          51.142.191.83
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path       Backends
  ----        ----       --------
  *
              /          one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
              /two-api   two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations:  kubernetes.io/ingress.class: azure/application-gateway
Events:       <none>
Commands used to create the AKS cluster using the Azure CLI:
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
// Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
// This line is from the tutorial -- this works as expected
//kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
// This is what I ran. It works locally
kubectl apply -f new-deploy.yaml
// Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and I identified the following issue right after running the same az aks create command: All the instances in one or more of your backend pools are unhealthy.
Since this appeared to indicate that the backend pools were unreachable, it seemed strange at first, so I looked at the logs of one of the pods based on the hello-app image you are using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that maybe in the Docker image you are using, nothing is configured to listen on port 80, which is the port you are using in your Kubernetes resource definitions.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly fine and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 8080
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 8080
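If you want to verify a port mismatch like this yourself before editing the YAML, one quick check (a sketch; the local port 9999 is arbitrary) is to port-forward straight to a pod and probe it from Python:

# First, in a separate terminal:
#   kubectl port-forward deploy/one-api 9999:8080
import urllib.request

# If the container really listens on 8080, this prints the hello-app banner;
# forwarding 9999:80 instead would fail to connect.
with urllib.request.urlopen("http://localhost:9999/") as resp:
    print(resp.read().decode())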

I am not able to access a React-Flask API on localhost using Kubernetes

Flask deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
      - name: flask-api-container
        image: umarrafaqat/flask-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP
  ports:
  - port: 5000
  selector:
    app: flask-api
React deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
      - name: react-app-container
        image: umarrafaqat/react-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: react-app-service
spec:
  ports:
  - port: 3000
  selector:
    app: react-app
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: localhost
    http:
      paths:
      - backend:
          service:
            name: react-app-service
            port:
              number: 3000
        path: /
        pathType: Prefix
I want to access this app on localhost but cannot do so. I am running it on minikube.
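One thing worth checking on minikube (not shown in the question) is whether the nginx ingress controller is actually enabled and reachable from the host; a sketch of the usual steps:

minikube addons enable ingress      # deploy the nginx ingress controller addon
kubectl get pods -n ingress-nginx   # wait until the controller pod is Running
minikube tunnel                     # with the docker driver, exposes the ingress on 127.0.0.1
curl http://localhost/              # should now return the React app through the ingress

Without the addon (or the tunnel on the docker driver), an Ingress with host: localhost has nothing listening on the host's port 80, so the app is unreachable even though all the pods are fine.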

Kubernetes ingress-nginx "not found" (Les Jackson tutorial)

I'm following the tutorial from Les Jackson about Kubernetes, but I'm stuck around 04:40:00. I always get a 404 returned from my ingress-nginx controller. I followed everything he does, but I can't get it to work.
I also read that this could have something to do with IIS, so I stopped the default website, which also runs on port 80.
The apps running in the containers are .NET Core.
Commands-depl & ClusterIP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Platforms-depl & ClusterIP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
Ingress-srv
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-srv
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: commands-clusterip-srv
            port:
              number: 80
I also added this to my hosts file:
127.0.0.1 acme.com
And I applied this from the nginx documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
kubectl get ingress
kubectl describe ing ingress-srv
Dockerfile CommandService
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx
Am I missing something?
I found my solution. There was a process running on port 80 with PID 4 on 0.0.0.0:80. I could stop it using NET stop HTTP in an admin cmd.
I noticed that running kubectl get services -n=ingress-nginx resulted in an ingress-nginx-controller, which is fine, but with an empty EXTERNAL-IP. Running kubectl get ingress also didn't show an ADDRESS. Now they both show "localhost" as the value for EXTERNAL-IP and ADDRESS.
Reference: Port 80 is being used by SYSTEM (PID 4), what is that?
So this can occur for several reasons:
Pods or containers are not working - try kubectl get pods -n <your namespace> to see if any are not in 'Running' status.
Assuming they are running, try kubectl describe pod <pod name> -n <your namespace> to see the events on your pod, just to make sure it's running properly.
I have noticed you are not exposing ports in your deployments. Please update your deployments like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Hope this helps!
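Once the controller shows an ADDRESS, a quick end-to-end check of both routes (a sketch, assuming the 127.0.0.1 acme.com hosts entry from the question):

import urllib.request

# A 404 here means the request reached the controller but no ingress rule
# matched; a connection error means the controller itself is not reachable.
for path in ("/api/platforms", "/api/c/platforms"):
    try:
        with urllib.request.urlopen("http://acme.com" + path) as resp:
            print(path, "->", resp.status)
    except Exception as exc:
        print(path, "->", exc)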

GCE ingress resource is taking too long to receive an IP address in GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the ingress resource, which takes around 15-20 minutes to receive an IP address. By this time the application times out and gets into an errored state. The usual time to assign the IP address is 2-3 minutes. Can anyone help with how to debug this issue?
This is happening only on one specific cluster; the same ingress gets its IP within 2 minutes on other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000
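A good first step when the address takes this long (a sketch; both commands are standard kubectl) is to watch the events the ingress controller emits while it provisions the load balancer, since quota, permission, or subnet problems for gce-internal ingresses tend to surface there well before an IP appears:

kubectl describe ingress awesome-ingress                    # check the Events section at the bottom
kubectl get events --sort-by=.metadata.creationTimestamp    # recent controller errors, if any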

Could two cluster IP services be connected in Kubernetes?

The situation is that I want to connect two ClusterIP services that are inside a tenant which already has Traefik as a NodePort, so neither of these two services can be a LoadBalancer, because the NodePort is used by Traefik.
The two services I am trying to connect work as follows. The first one, which I called "master", will receive a POST from the client with a text and will call the other service, called "slave", which will append some text ("Hola Patri") to the text sent by the client. The two services are Flask services defined by the app.py in the Docker image. You can see the app.py for both images below:
master/app.py
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        r = requests.post("http://slave:5001", data=text)
        result = r.text
        return result

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=True)
slave/app.py
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        #text = request.data
        texto_final = str(text) + 'Hola Patri'
        return texto_final

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5001, debug=True)
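For reference, a quick local check of the intended flow (a sketch, assuming both apps are reachable on localhost, e.g. via port-forwards to 5000 and 5001):

import requests

# The master forwards the body to the slave, which appends 'Hola Patri'.
r = requests.post("http://localhost:5000/", data="hello ")
print(r.text)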
The configuration of the deployments and the services is defined in two YAML files: master_src.yaml and slave_src.yaml.
master_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: master
        imagePullPolicy: Always
        securityContext:
          runAsUser: 1000
          runAsNonRoot: true
        image: reg-dhc.app.corpintra.net/galiani/innovation:mastertest
        ports:
        - protocol: TCP
          containerPort: 5000
      imagePullSecrets:
      - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: innovation
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  selector:
    app: myapp
slave_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slave
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: slave
        imagePullPolicy: Always
        securityContext:
          runAsUser: 1000
          runAsNonRoot: true
        image: reg-dhc.app.corpintra.net/galiani/innovation:slavetest
        ports:
        - protocol: TCP
          containerPort: 5001
      imagePullSecrets:
      - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: slave
  namespace: innovation
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 5001
    targetPort: 5001
I have also created a network policy to allow the traffic between the two services. The yaml used to define the network policy is the following.
networkpolicy_src.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: ingress-to-all
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
    ports:
    - port: 5000
      protocol: TCP
    - port: 5001
      protocol: TCP
  policyTypes:
  - Ingress
The connection between the master service and the slave service is not working. I can access the master and the slave independently. Nevertheless, when I try to make a POST to the master (using curl) that should connect to the slave, I get the following error:
curl: (52) Empty reply from server
Thank you for your help in advance!
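A way to narrow this down is to test the master-to-slave hop in isolation from inside the cluster (a sketch; the pod name is a placeholder, and requests is already installed in the master image since app.py imports it):

kubectl exec -it <master-pod-name> -n innovation -- \
  python -c "import requests; print(requests.post('http://slave:5001/', data='x').text)"

If this call hangs or returns an empty reply, the problem is between the pods (Service selectors or NetworkPolicy) rather than in Traefik.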
For the new question I have regarding the connection using Traefik, here is the YAML of the Traefik ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-innovation
  namespace: innovation
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /master
        backend:
          serviceName: master
          servicePort: 5000
      - path: /slave
        backend:
          serviceName: slave
          servicePort: 5001
I have also corrected the NetworkPolicy YAML, and now it is:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: master-to-slave
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: app-slave
  ingress:
  - ports:
    - port: 5000
      protocol: TCP
    - port: 5001
      protocol: TCP
  - from:
    - namespaceSelector:
        matchLabels:
          app: app-master
Thanks again for your help!
The problem could be having the same label app: myapp for both the master and the slave. Because both Deployments share that label, the slave Service's selector also matches the master pods, so a request to http://slave:5001 can be routed to a master pod that is not listening on port 5001, which produces the empty reply. Change the label to app: master for the master Deployment and Service, and app: slave for the slave Deployment and Service, as sketched below.
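A minimal sketch of that relabeling for the master side (changed fields only; the slave manifests get the mirror-image change with app: slave):

# master_src.yaml -- Deployment
spec:
  selector:
    matchLabels:
      app: master      # was: app: myapp
  template:
    metadata:
      labels:
        app: master    # was: app: myapp
---
# master_src.yaml -- Service
spec:
  selector:
    app: master        # was: app: myapp

With distinct labels, each Service selects only its own pods, so http://slave:5001 always lands on a pod that is actually listening on 5001.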