Is it possible to use the Kubernetes API server's proxy feature to reach a specific port on a service when that service has many ports open?
I have looked at the Swagger API spec, and there doesn't seem to be any parameter for choosing one of the potentially many ports of a service.
I have this influxdb service:
apiVersion: v1
kind: Service
metadata:
  labels:
    base_name: influx
  name: influx
  namespace: test
spec:
  clusterIP: 10.3.0.12
  ports:
  - name: admin-panel
    nodePort: 32646
    port: 8083
    protocol: TCP
    targetPort: 8083
  - name: api
    nodePort: 32613
    port: 8086
    protocol: TCP
    targetPort: 8086
  - name: snapshots
    nodePort: 30586
    port: 8087
    protocol: TCP
    targetPort: 8087
  selector:
    base_name: influx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
and I am trying to access the admin panel through the Kubernetes API proxy like so:
https://kube-master/api/v1/proxy/namespaces/test/services/influx
which results in a 503 error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"influx\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
You should be able to append either an actual port number or a port name at the end.
As an aside, it looks like your master is not firewalled. If that is true, I recommend not making it accessible from outside the cluster. Instead, run kubectl proxy on your localhost; this creates an authenticated proxy to the master, and then you can hit:
http://localhost:8001/api/v1/proxy/namespaces/test/services/influx:8083/
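In other words, the port (by number or by name) goes after a colon on the service segment of the proxy path. A sketch of how those URLs are formed, using the port names from the manifest above:

```shell
# Append ":<port>" or ":<port-name>" to the service name in the proxy path.
API="http://localhost:8001"    # default address of `kubectl proxy`
NS="test"
SVC="influx"
BY_NUMBER="${API}/api/v1/proxy/namespaces/${NS}/services/${SVC}:8083/"
BY_NAME="${API}/api/v1/proxy/namespaces/${NS}/services/${SVC}:admin-panel/"
echo "${BY_NUMBER}"
echo "${BY_NAME}"
# curl "${BY_NUMBER}"   # uncomment once `kubectl proxy` is running
```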
Related
I'm creating a web application with Kubernetes. In my backend application I have a server that listens for socket connections on port 3000. I deployed my application (front and back) and it works fine: I can get data via HTTP requests. Now I want to establish a socket connection with my backend application, but I don't know which address and port to use in my frontend application (or which configuration to do). I searched with my few keywords but couldn't find a tutorial or documentation for this. If anyone has an idea, I would be thankful.
Each deployment (frontend and backend) should have its own service.
Ingress (web) traffic would be routed to the frontend service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
In this example, your frontend application would talk to host backend-svc on port 6379 for the backend connection:
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
Example API implementation:
io.adapter(socketRedis({ host: 'backend-svc', port: '6379' }));
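The host backend-svc works because a Service is resolvable inside the cluster via cluster DNS. A sketch of the naming pattern (the default namespace is an assumption here):

```shell
# In-cluster DNS pattern: <service>.<namespace>.svc.cluster.local
SVC="backend-svc"
NS="default"      # assumption: the namespace the service lives in
PORT=6379
ADDR="${SVC}.${NS}.svc.cluster.local"
echo "${ADDR}:${PORT}"
```

Within the same namespace, the short name backend-svc alone is enough, which is why the adapter config above works.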
So the final manifest will be:
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30005
  type: NodePort
It will work for defining a specific targetPort.
A Service is an abstract way to expose an application running on a set of Pods. The manifest below creates such a service; here targetPort: 8080 is the pod's port. The manifest has two main parts. The metadata section gives the service its name and a label. The spec section (short for specification) describes the service itself: the selector picks the pods, and the ports are declared here. port is the service's own port, targetPort is the port the service forwards requests to, and nodePort is how the outside world (from outside the cluster) reaches the service. Finally, type sets the type of the service: with type: NodePort, the service exposes a port (the nodePort) on every node, reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30005
  type: NodePort
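Concretely, the three port fields give three ways to reach the application. A sketch of the resulting addresses (the node IP below is an assumption, not from the original post):

```shell
# port: 8080       -> in-cluster clients use apiserver-service:8080
# targetPort: 8080 -> the container's own listening port
# nodePort: 30005  -> external clients use <node-ip>:30005
SERVICE_URL="http://apiserver-service:8080"    # from inside the cluster
NODE_IP="203.0.113.10"                         # assumption: one of your node IPs
NODEPORT_URL="http://${NODE_IP}:30005"         # from outside the cluster
echo "${SERVICE_URL}"
echo "${NODEPORT_URL}"
```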
The first example in the Kubernetes Service documentation, Defining a Service, contains what you ask for: a Service where port: and targetPort: are different.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
I have deployed RabbitMQ in Kubernetes using a service of the LoadBalancer type. When the service is created, an external IP is assigned. Could you please tell me whether I can bind another deployment to this IP on other ports? Thanks.
It is possible, you just have to create a service with multiple ports, for example:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: any-other-port   # port names must be DNS-compatible (no spaces)
    port: <port-number>
    targetPort: <target-port>
  selector:
    app: app
  type: LoadBalancer
And you will get output similar to this:
$ kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
service-name   LoadBalancer   <Internal-IP>   <External-IP>   80:30870/TCP,443:32602/TCP,<other-port>:32388/TCP
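Each declared port is then served on the same external IP. A sketch of the resulting addresses (the IP below is a stand-in for the real EXTERNAL-IP column):

```shell
EXTERNAL_IP="198.51.100.7"   # assumption: value from the EXTERNAL-IP column
HTTP_URL="http://${EXTERNAL_IP}:80"
HTTPS_URL="https://${EXTERNAL_IP}:443"
echo "${HTTP_URL}"
echo "${HTTPS_URL}"
```

Note that all ports of one Service route to the same set of pods (via the selector); a second deployment would need its own Service and would normally get its own IP.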
I created a NodePort service for the httpd pods and a ClusterIP service for the tomcat pods; they're in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and tomcat services are not the same type. When I change both to ClusterIP, or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? Even though I can telnet to the httpd and tomcat pods from outside, for some reason the app functionality breaks (though some static and JSP pages do get processed).
httpd-service:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-443
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations: {}
spec:
  selector:
    app: tomcat7 # Metadata label of the deployment pod template or pod metadata label
  ports:
  - name: port-8080 # Optional when there is only one port
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port-8262
    protocol: TCP
    port: 8262
    targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1234": "test-web-dev/httpd:1234"   # ConfigMap keys must be strings, so quote them
  "8262": "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: port-8262
    port: 8262
    protocol: TCP
    targetPort: 8262
Answering my own question.
NodePort services are required when the service needs to be exposed outside of the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, users need to connect to both httpd and Tomcat (on a specific app port) from outside, so both have to be NodePort-type services. Configuring Tomcat as ClusterIP broke the app, since the Tomcat app port wasn't reachable from the internet.
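The distinction boils down to two address patterns. A sketch (the node IP and allocated nodePort below are assumptions for illustration):

```shell
# ClusterIP / in-cluster DNS: only resolvable from inside the cluster
IN_CLUSTER="http://tomcat7.test-web-dev.svc.cluster.local:8080"
# NodePort: additionally reachable on any node's IP at the allocated port
NODE_IP="203.0.113.10"   # assumption: one of the cluster's node IPs
NODE_PORT=30262          # assumption: the port allocated for port-8262
EXTERNAL="http://${NODE_IP}:${NODE_PORT}"
echo "${IN_CLUSTER}"
echo "${EXTERNAL}"
```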
I'm currently serving MQTT messages over WebSocket to js clients. I use RabbitMQ to write messages on queue from a java backend and have them routed to the clients/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform and everything works just fine as long as I publish the RabbitMQ pod with a Kubernetes Load Balancer directly to the internet.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
  - name: http-manager
    nodePort: 30019
    port: 80
    protocol: TCP
    targetPort: 15672
  - name: mqtt-broker
    nodePort: 31571
    port: 1883
    protocol: TCP
    targetPort: 1883
  - name: ws-service
    nodePort: 32048
    port: 15675
    protocol: TCP
    targetPort: 15675
  selector:
    app: rabbitmq
When I try to replace the Kubernetes load balancer with a NodePort service and expose it through an Ingress and a GCP load balancer, the health probe fails and never recovers:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
  - name: ws-port
    port: 15675
    protocol: TCP
    targetPort: 15675
  - name: manager-port
    port: 15672
    protocol: TCP
    targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
  - host: mqtt-host.dom.cloud
    http:
      paths:
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15675
        path: /ws/*
      - backend:
          serviceName: rabbitmq-internal
          servicePort: 15672
        path: /*
The probe is HTTP, so I tried to assign a custom TCP probe, and even to trick GCP by switching to a probe that points to another HTTP port on the same pod, with no success.
I need to use the GCP load balancer to have a unified frontend where I can assign an SSL certificate for both the HTTPS and WSS protocols.
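One direction worth trying (a sketch, assuming you are on GKE and the BackendConfig CRD is available) is to tell the GCP load balancer explicitly which port and path to health-check, e.g. the RabbitMQ management UI, which answers plain HTTP with a 200:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-hc   # hypothetical name
spec:
  healthCheck:
    type: HTTP
    port: 15672       # management UI port, answers plain HTTP
    requestPath: /    # assumption: returns 200 on this cluster
```

The Service is then linked to it with the cloud.google.com/backend-config annotation on rabbitmq-internal, so the GCE health check no longer probes the WebSocket port, which does not speak plain HTTP.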