I created a NodePort service for the httpd pods and a ClusterIP service for the tomcat pods; they are in the same namespace behind an nginx LB. There is a weird issue with the app when the httpd and tomcat services are not the same type. When I change both to ClusterIP or both to NodePort, everything works fine...
Traffic flow is like this:
HTTP and HTTPS traffic -> LB -> Ingress -> Httpd -> Tomcat
HTTPS virtual host custom port traffic -> LB -> Tomcat
TCP traffic -> LB -> Tomcat
Is there anything that can cause issues between httpd and Tomcat? I can telnet to the httpd and tomcat pods from outside, but for some reason the app functionality breaks (some static and JSP pages do get served, though).
httpd-service:

apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      protocol: TCP
      targetPort: 80
    - name: port-443
      port: 443
      protocol: TCP
      targetPort: 443
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  externalTrafficPolicy: Local
tomcat-service:

apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
  annotations:
spec:
  selector:
    app: tomcat7 # Metadata label of the Deployment pod template or pod metadata label
  ports:
    - name: port-8080 # Optional when there is only one port
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port-8262
      protocol: TCP
      port: 8262
      targetPort: 8262
  sessionAffinity: "ClientIP"
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
ingress lb:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1234": "test-web-dev/httpd:1234"
  "8262": "test-web-dev/tomcat7:8262"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
    - name: port-8262
      port: 8262
      protocol: TCP
      targetPort: 8262
Answering my own question.
NodePort services are required when a service needs to be exposed outside the cluster, e.g. to the internet.
ClusterIP services are used when services need to communicate internally, e.g. frontend to backend.
In my case, users need to connect to both httpd and Tomcat (on a specific app port) from outside, so both the httpd and tomcat services have to be of type NodePort. Configuring tomcat as ClusterIP breaks the app, since the Tomcat app port isn't reachable from the internet.
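For illustration, a minimal sketch of the fix, reusing the tomcat7 service from above and only adding type: NodePort (no nodePort values are pinned here; Kubernetes picks them from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: tomcat7
  namespace: test-web-dev
spec:
  type: NodePort          # was ClusterIP (the default) before
  selector:
    app: tomcat7
  ports:
    - name: port-8080
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port-8262
      protocol: TCP
      port: 8262
      targetPort: 8262    # the app port that must be reachable from outside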
Related
I have deployed a Node.js/Remix app to the DigitalOcean Kubernetes service.
I want all my subdomains to be rewritten to subroutes,
so xxx.example.com should be rewritten as example.com/xxx. Is it possible to do this with the Kubernetes load balancer?
My current config is:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-deployment
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
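A plain LoadBalancer Service works at layer 4 and cannot rewrite hosts into paths; an ingress controller can. As a sketch only, assuming the ingress-nginx controller is installed and a wildcard DNS record for *.example.com points at it (the Ingress name is hypothetical), a server-snippet can issue the redirect:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-to-path   # hypothetical name
  annotations:
    # Redirect xxx.example.com/... to example.com/xxx/...
    nginx.ingress.kubernetes.io/server-snippet: |
      if ($host ~ ^(?<sub>.+)\.example\.com$) {
        return 301 https://example.com/$sub$request_uri;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: "*.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80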
I want to expose the k8s API using a service. My issue is that the API only responds on port 6443 over HTTPS; any attempt over HTTP returns status 400 Bad Request. How can I "force" the service to use HTTPS?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
    - port: 80 # Port on which your service is running
      targetPort: 6443
      protocol: TCP
      name: http
  selector:
    name: kube-apiserver-master-node
Maybe this?
apiVersion: v1
kind: Service
metadata:
  name: k8s-api
  namespace: kube-system
  labels:
    label: k8s-api
spec:
  ports:
    - port: 443 # Port on which your service is running
      targetPort: 6443
      protocol: TCP
      name: https
  selector:
    name: kube-apiserver-master-node
If you are using the NGINX ingress, by default it offloads SSL and sends plain HTTP to the backend.
Changing the service port to 6443 might help if you connect to the service directly.
If you are using the NGINX ingress, make sure it doesn't terminate SSL:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
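For context, a sketch of where those annotations go, assuming an Ingress in front of the k8s-api service above (the hostname is hypothetical, and ssl-passthrough additionally requires the controller to run with --enable-ssl-passthrough):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api
  namespace: kube-system
  annotations:
    # Keep TLS end-to-end instead of terminating it at the ingress
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: k8s-api.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: k8s-api
                port:
                  number: 443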
So the final manifest will be:
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30005
  type: NodePort
This will work for defining a specific targetPort.
A Service is an abstract way to expose an application running on a set of Pods. The manifest below creates such a Service; here targetPort: 8080 is the pod port. The manifest has two main parts: metadata, which names the service and gives it a label, and spec (short for specification), which describes the service itself, i.e. the selector and the ports. In the ports section, port is the service's own port, targetPort is the port on the pods to which the service sends requests, and nodePort is the port through which the outside world (outside the cluster) can reach the service. Finally, type is the type of the service; type: NodePort means the service exposes a port (the nodePort) on every node, reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30005
  type: NodePort
The first example in the Kubernetes Service documentation, Defining a Service, contains what you ask: a Service where port: and targetPort: differ.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file, kubepodDeploy.yaml, looks as follows:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: privateRepo/my-nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      protocol: TCP
      name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations:
Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without running a proxy server in front or changing iptables rules (not a good idea). The NodePort service type always exposes the service in the port range 30000-32767,
so at any point your service will be running on ipaddress:some_higher_port.
You can run a proxy in front that redirects the traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Note that the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
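As a sketch, the LoadBalancer variant could look like this, reusing the run: my-nginx selector from above (the service name is hypothetical; the cloud provider assigns the external IP, and port 8080 is then served directly):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-lb   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    run: my-nginx
  ports:
    - name: http
      port: 8080      # external port on the cloud load balancer
      targetPort: 80  # container port of the nginx pods
      protocol: TCP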
I'm currently serving MQTT messages over WebSocket to JS clients. I use RabbitMQ to write messages to a queue from a Java backend and have them routed to the client/frontend apps.
I deployed everything on a Kubernetes cluster on Google Cloud Platform, and everything works just fine as long as I publish the RabbitMQ pod with a Kubernetes LoadBalancer directly to the internet:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
spec:
  type: LoadBalancer
  ports:
    - name: http-manager
      nodePort: 30019
      port: 80
      protocol: TCP
      targetPort: 15672
    - name: mqtt-broker
      nodePort: 31571
      port: 1883
      protocol: TCP
      targetPort: 1883
    - name: ws-service
      nodePort: 32048
      port: 15675
      protocol: TCP
      targetPort: 15675
  selector:
    app: rabbitmq
When I try to replace the Kubernetes LoadBalancer with a NodePort service and expose it through an Ingress and a GCP balancer, the health probe fails and never recovers:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
spec:
  ports:
    - name: ws-port
      port: 15675
      protocol: TCP
      targetPort: 15675
    - name: mamanger-port
      port: 15672
      protocol: TCP
      targetPort: 15672
  selector:
    app: rabbitmq
  sessionAffinity: None
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basictest
  namespace: default
spec:
  rules:
    - host: mqtt-host.dom.cloud
      http:
        paths:
          - backend:
              serviceName: rabbitmq-internal
              servicePort: 15675
            path: /ws/*
          - backend:
              serviceName: rabbitmq-internal
              servicePort: 15672
            path: /*
The probe is HTTP, so I tried to assign a custom TCP probe, and even tried to trick GCP by switching to a probe that points to another HTTP port on the same pod, with no success.
I need to use the GCP balancer to have a unified frontend where I can assign an SSL certificate for both the HTTPS and WSS protocols.
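As a sketch of one way this is often addressed on GKE (assumptions: the RabbitMQ management port answers plain HTTP with a 200 on /, and the BackendConfig name is hypothetical), a BackendConfig can point the GCP health check at the management port instead of the WebSocket port, attached to the Service via the cloud.google.com/backend-config annotation:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: rabbitmq-hc   # hypothetical name
  namespace: default
spec:
  healthCheck:
    type: HTTP
    port: 15672       # RabbitMQ management UI, answers plain HTTP
    requestPath: /
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq-internal
  annotations:
    # Tell the GCE ingress controller to use the custom health check
    cloud.google.com/backend-config: '{"default": "rabbitmq-hc"}'
spec:
  ports:
    - name: ws-port
      port: 15675
      protocol: TCP
      targetPort: 15675
    - name: mamanger-port
      port: 15672
      protocol: TCP
      targetPort: 15672
  selector:
    app: rabbitmq
  type: NodePort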