I have two k8s Deployments in one GKE cluster for a single web application: one is the frontend (React) and the other is the backend (Python). The frontend works fine, but when I do something on the frontend that calls the backend, I get an error. I have an Ingress for the frontend, which works perfectly; the only thing I can't figure out is why the frontend can't reach the backend. I want them to communicate via Services. I have the following services:
Frontend service
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: hello
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
Backend service
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: hello
    tier: backend
    track: stable
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
Any fix suggestions?
When the application is a client-side application like React or Angular, it runs in the client's browser, not on the server side. In that case, the application in the browser must invoke the APIs on the server (the backend application).
In such a situation, even though the application is named "backend", it must expose its APIs via the Ingress so that those APIs can be reached by the frontend application running in the client's browser.
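As a rough sketch (the Ingress name hello-ingress and the /api path prefix are assumptions here; the Service names and ports come from the question, and details depend on your ingress controller), the Ingress could route both the frontend and the backend:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - http:
      paths:
      # API calls from the browser go to the backend Service
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      # everything else is served by the frontend Service
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80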
Related
I have local and dockerized apps which work excellently on localhost: a Java backend at 8080, Angular at 4200, ActiveMQ at 8161, and Postgres on 5432.
Now I am also trying to Kubernetes-ize the apps so they work on localhost.
As far as I know, Kubernetes assigns random IPs inside the cluster. What should I do to make the apps reachable on localhost and able to listen to each other? Is there any way to make them automatically start on those localhost ports instead of using port forwarding for each service?
Every service and deployment has a similar structure:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image:
        ports:
        - containerPort: 8080
I tried port-forwarding, which works, but it requires a lot of manual work (opening a few new PowerShell windows and then doing the port forwarding manually).
In the Kubernetes ecosystem, apps talk to each other through their Services.
If they are in the same namespace, they can use the Service name directly; if not, they need to specify the full name, which includes the namespace:
my-svc.my-namespace.svc.cluster-domain.example
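As an illustration (the Deployment shown here and the BACKEND_URL variable are hypothetical; the Service name and port 8080 come from the question's backend), another pod in the same namespace could reach the backend simply by its Service name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: frontend:latest        # placeholder image
        env:
        - name: BACKEND_URL           # hypothetical variable read by the app
          # same namespace: the short Service name resolves;
          # across namespaces use backend.<namespace>.svc.cluster.local
          value: "http://backend:8080"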
Never mind, I found a way to do it automatically with port-forwarding by simply running one script.
I wrote a .bat script with these steps:
run the Kubernetes deployments file
run the Kubernetes services file
wait 15 seconds to give the pods time to change state from Pending to Running
do port forwarding for each service, with every forward running in a new PowerShell window that stays open
I'm creating a web application using Kubernetes. In my backend application I have a server that listens for socket connections on port 3000. I deployed my application (front and back) and it works fine; I can get data via HTTP requests. Now I want to establish a socket connection with my backend application, but I don't know which address and which port I have to use in my frontend application (or which configuration to do). I searched with my few keywords but couldn't find a tutorial or documentation for this. If anyone has an idea, I would be thankful.
Each deployment (frontend and backend) should have its own service.
Ingress (web) traffic would be routed to the frontend service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
In this example, your frontend application would talk to host: backend-svc on port 6379 for a backend connection.
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
Example API implementation:
io.adapter(socketRedis({ host: 'backend-svc', port: '6379' }));
My Spring app + Swagger (running on 8080) works successfully locally, but when I deploy the app on Kubernetes, the port information is missing and it calls the host without a port, so the request fails.
Locally, this information is shown in the UI like below:
[ Base URL: localhost:8080/customerservice/api ]
http://localhost:8080/customerservice/api/v2/api-docs
When deployed on Kubernetes:
[ Base URL: kubernetes-ip/customerservice/api ]
http://kubernetes-ip:32004/customerservice/api/v2/api-docs
But the base URL must be kubernetes-ip:32004/customerservice/api.
I have created a Kubernetes Service (NodePort) to access the deployment, and 32004 is the service's node port.
kind: Service
apiVersion: v1
metadata:
  name: customer-service
  namespace: altyapi
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 32004
  selector:
    app: customer
  type: NodePort
What am I missing here? Thanks for any help and suggestions.
I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service
This works great for routing: requests coming into the ingress get routed to the correct service port based on path. However, the problem I have is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.
You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod and all requests to 8081 from one client went to the same pod, but not that those are the same pod for any one client. When I run with a single pod instance everything is great, but when I start spinning up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work.
I have tried other annotations in the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing seems to work. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
Session persistence settings will only work if you configure kube-proxy so that it forwards requests to the local pod only and not to random pods across the cluster.
You can do this by setting the following at the Service level:
service.spec.externalTrafficPolicy: Local
You can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, not with an ingress, though.
Keeping everything else the same, having this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
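For reference, here is a sketch of the Ingress from the question with the cookie-based affinity annotations already mentioned above in place of upstream-hash-by; whether this combination is sufficient depends on the ingress-nginx version, so treat it as something to try rather than a confirmed fix:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    # affinity annotations from the question, applied together
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service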
I'm trying to expose my Deployment on a port that I can access from my local computer via Minikube.
I have tried two YAML configurations (one a LoadBalancer, one just a Service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service whose provisioning never finishes, and the external IP never gets assigned.
The second results in a port of bot:8000 TCP, bot:0 TCP in my dashboard and when I try "minikube service bot" nothing happens. The same happens if I type in "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service is meant for Cloud providers and not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.