I'm creating a web application using Kubernetes. My backend application has a server that listens for socket connections on port 3000. I deployed the application (frontend and backend) and it works fine: I can get data via HTTP requests. Now I want to establish a socket connection with my backend application, but I don't know which address and port to use in my frontend application (or which configuration is needed). I searched with my few keywords but couldn't find a tutorial or documentation for this. If anyone has an idea, I would be thankful.
Each deployment (frontend and backend) should have its own service.
Ingress (web) traffic would be routed to the frontend service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
In this example, your frontend application would talk to host backend-svc on port 6379 for a backend connection:
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
Example API implementation:
io.adapter(socketRedis({ host: 'backend-svc', port: '6379' }));
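Applied to the original question, a minimal sketch of a Service for the socket server listening on port 3000 might look like the following (the name socket-svc and the label app: backend are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: socket-svc          # assumed name
spec:
  selector:
    app: backend            # assumed pod label
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Pods inside the cluster could then open the socket connection against socket-svc:3000; a browser-based frontend would instead have to go through whatever external entry point (Ingress or LoadBalancer) already exposes the backend.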
I want to expose the Kubernetes API using a Service. My issue is that the API only responds on port 6443 over HTTPS; any attempt over HTTP returns status 400 Bad Request. How can I "force" the Service to use HTTPS?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 6443  # port the kube-apiserver listens on
    protocol: TCP
    name: http
  selector:
    name: kube-apiserver-master-node
Maybe this?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
  - port: 443         # port exposed by the Service
    targetPort: 6443  # port the kube-apiserver listens on
    protocol: TCP
    name: https
  selector:
    name: kube-apiserver-master-node
If you are using the Nginx ingress controller, by default it offloads SSL and sends plain HTTP to the backend. Changing the Service port that maps to 6443 may help if you are connecting to the Service directly. If you are using the Nginx ingress, make sure it does not terminate SSL by setting these annotations on the Ingress:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
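For reference, a minimal sketch of an Ingress carrying those annotations in front of the k8s-api Service above (the host name is an assumption, and ssl-passthrough also requires the Nginx ingress controller to be started with --enable-ssl-passthrough):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-api
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: k8s-api.example.com   # assumed host
    http:
      paths:
      - backend:
          serviceName: k8s-api
          servicePort: 443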
I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service
This works great for routing: requests coming into the ingress get routed to the correct service port based on path. However, the problem I have is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.
You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod and all requests to 8081 from one client went to the same pod, but not that those are the same pod for any one client. When I run with a single pod instance everything is great, but when I start spinning up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work.
I have tried other annotations on the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing seems to work. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
Session persistence settings will only work if you configure kube-proxy to forward requests to the local pod only, not to random pods across the cluster.
You can do this by setting the following at the Service level:
service.spec.externalTrafficPolicy: Local
You can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, not with an ingress though.
Keeping everything else the same, having this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
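As a sketch of how this combines with the Ingress from the question, the cookie-affinity annotations the asker already tried would stay on the Ingress (the session-cookie-name value is an assumption):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"   # assumed cookie name
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service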
I'm currently serving MQTT messages over WebSocket to JS clients. I use RabbitMQ to write messages to a queue from a Java backend and have them routed to the clients/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform, and everything works just fine as long as I expose the RabbitMQ pod directly to the internet with a Kubernetes LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - name: http-manager
    nodePort: 30019
    port: 80
    protocol: TCP
    targetPort: 15672
  - name: mqtt-broker
    nodePort: 31571
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: ws-service
    nodePort: 32048
    port: 15675
    protocol: TCP
    targetPort: 15675
  selector:
    app: rabbitmq
I tried to replace the Kubernetes LoadBalancer with a NodePort service and expose it through an Ingress and a GCP load balancer, but the health probe fails and never recovers.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
  - host: mqtt-host.dom.cloud
    http:
      paths:
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15675
        path: /ws/*
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15672
        path: /*
The probe is HTTP, so I tried to assign a custom TCP probe and even tried to trick GCP by pointing the probe at another HTTP port on the same pod, with no success.
I need to use the GCP load balancer to have a unified frontend on which to assign an SSL certificate for both the HTTPS and WSS protocols.
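For what it's worth, on GKE a custom health check can be attached to a NodePort Service via a BackendConfig. This is only a hedged sketch of that mechanism; the name rabbitmq-hc and using the management UI on port 15672 as the probe target are assumptions:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-hc          # assumed name
spec:
  healthCheck:
    type: HTTP
    port: 15672              # assumed: RabbitMQ management UI answers HTTP here
    requestPath: /
The BackendConfig would then be referenced from the rabbitmq-internal Service with the annotation cloud.google.com/backend-config: '{"default": "rabbitmq-hc"}'.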
I have a container which exposes multiple ports, so the Kubernetes Service configured for the deployment looks like the following:
kind: Service
apiVersion: v1
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    name: myapp
  ports:
    - protocol: TCP
      port: 5555
      targetPort: 5555
    - protocol: TCP
      port: 5556
      targetPort: 5556
I use Istio to manage routing and to expose this service via the Istio ingress gateway.
We have one gateway for port 80; do we have to create two different gateways for the same host, with two different virtual services?
I want to configure things so that port 80 of "example.myhost.com" routes to 5556, and some other port, say 8088 of "example.myhost.com", routes to 5555 of the service.
Is that possible with one VirtualService?
Assuming that the Istio Gateway is serving TCP network connections, you might be able to combine the two external ports 80 and 8088 in one Gateway configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 8088
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field here is the list of target addresses exposed by this Gateway.
To route the traffic on to the underlying Pods, you can then define a VirtualService that matches on those ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - myapp-gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: myapp.prod.svc.cluster.local
        port:
          number: 5556
  - match:
    - port: 8088
    route:
    - destination:
        host: myapp.prod.svc.cluster.local
        port:
          number: 5555
The VirtualService above defines the rules that route traffic arriving on ports 80 and 8088 for example.myhost.com to the myapp service ports 5556 and 5555 respectively.
I encourage you to read more about Istio's TCPRoute capabilities and how to apply them further.
Is it possible to use the Kubernetes API server's proxy feature to reach a specific port on a service when the service has many ports open?
I have looked at the Swagger API spec, and there doesn't seem to be any parameter for choosing one of the potentially many ports of a service.
I have this InfluxDB service:
apiVersion: v1
kind: Service
metadata:
  labels:
    base_name: influx
  name: influx
  namespace: test
spec:
  clusterIP: 10.3.0.12
  ports:
  - name: admin-panel
    nodePort: 32646
    port: 8083
    protocol: TCP
    targetPort: 8083
  - name: api
    nodePort: 32613
    port: 8086
    protocol: TCP
    targetPort: 8086
  - name: snapshots
    nodePort: 30586
    port: 8087
    protocol: TCP
    targetPort: 8087
  selector:
    base_name: influx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
and I am trying to access the admin panel through the Kubernetes API proxy like so:
https://kube-master/api/v1/proxy/namespaces/test/services/influx
which results in a 503 error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"influx\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
You should be able to append either the actual port number or the port name at the end.
As an aside, it looks like your master is not firewalled. If that is true, I recommend not making it accessible outside the cluster; run kubectl proxy on your localhost instead, which creates a proxy to the master, and then you can hit:
http://localhost:8001/api/v1/proxy/namespaces/test/services/influx:8083/
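Using the port name from the Service above instead of the number, the same proxy URL would be:
http://localhost:8001/api/v1/proxy/namespaces/test/services/influx:admin-panel/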