OpenShift - accessing a non-HTTP port - Kubernetes

I have a fairly simple (Spring Boot) app that listens on the following ports:
8080 - for HTTP (the Swagger page)
1141 - for non-HTTP traffic. It is a FIX (https://en.wikipedia.org/wiki/Financial_Information_eXchange) port, i.e. a direct socket-to-socket TCP/IP port. The FIX engine used is QuickFIX/J.
I'm trying to deploy this app on an OpenShift cluster. Here are the YAMLs I have:
Deployment config:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: pricing-sim-depl
  name: pricing-sim-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    app: pricing-sim-depl
  strategy:
    resources:
      limits:
        cpu: 200m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 512Mi
    type: Recreate
  template:
    metadata:
      labels:
        app: pricing-sim-depl
    spec:
      containers:
        - image: >-
            my-docker-registry/alex/pricing-sim:latest
          name: pricing-sim-pod
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 1141
              protocol: TCP
          tty: true
          resources:
            limits:
              cpu: 200m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 512Mi
Then I created a ClusterIP service for accessing the HTTP Swagger page:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-sv
  name: pricing-sim-service
  namespace: my-namespace
spec:
  ports:
    - name: swagger-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: pricing-sim-depl
  type: ClusterIP
and also the Route for accessing it:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: pricing-sim-tn-swagger
  name: pricing-sim-tunnel-swagger
  namespace: my-namespace
spec:
  host: pricing-sim-swagger-my-namespace.apps.cpaas.service.test
  port:
    targetPort: swagger-port
  to:
    kind: Service
    name: pricing-sim-service
    weight: 100
  wildcardPolicy: None
The last component is a NodePort service to access the FIX port:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pricing-sim-esp-service
  name: pricing-sim-esp-service
  namespace: my-namespace
spec:
  type: NodePort
  ports:
    - port: 1141
      protocol: TCP
      targetPort: 1141
      nodePort: 30005
  selector:
    app: pricing-sim-depl
So far, the ClusterIP service and the Route work fine. I can access the Swagger page at
http://fxc-fix-engine-swagger-my-namespace.apps.cpaas.service.test
However, I'm not sure how I can access the FIX port (defined by the NodePort service above). First, I can't use a Route, as it is not an HTTP endpoint (which is why I defined it as NodePort).
Looking at the OpenShift console, I can see the following for 'pricing-sim-esp-service':
Selectors:        app=pricing-sim-depl
Type:             NodePort
IP:               172.30.11.238
Hostname:         pricing-sim-esp-service.my-namespace.svc.cluster.local
Session affinity: None
Traffic (one row):
  Route/Node Port: 30005
  Service Port:    1141/TCP
  Target Port:     1141
  Hostname:        none
  TLS Termination: none
BTW, I'm following the suggestion in this Stack Overflow post: OpenShift :: How do we enable traffic into pod on a custom port (non-web / non-http)
I've also tried using the LoadBalancer service type, which actually shows an external IP on the service page above. But that external IP doesn't seem to be accessible from my local PC either.
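For reference, this is how I would expect to reach the NodePort service from outside the cluster (a sketch, assuming I am allowed to list nodes and assuming the worker nodes are reachable from my PC on node port 30005 defined above):

  # find a node's IP (internal or external, whichever is routable from my machine)
  oc get nodes -o wide

  # then test the raw TCP port from the local machine
  nc -vz <node-ip> 30005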
The version of OpenShift we are running is:
OpenShift Master: v3.11.374
Kubernetes Master: v1.11.0+d4cacc0
OpenShift Web Console: 3.11.374-1-3365aaf
Thank you in advance!

Related

Error syncing load balancer: failed to ensure load balancer: failed to build load-balancer

We are only trying out the Kubernetes setup and strictly following the docs (at this point).
We are on DigitalOcean, and there are a bunch of tutorials and docs related to it as well (all of them added below for reference).
At this point, I have managed to deploy the two pods and am now trying to configure the load balancer for them in the simplest way possible. Everything gets deployed, but the load balancer fails to initialize with the following error:
Error syncing load balancer: failed to ensure load balancer: failed to build load-balancer request: specified health check port 8080 does not exist on service default/https-with-cert
I verified that the health check actually works on the pods if I hit them directly. In fact, this is the same health check that we have been using for the last 2 years on manually set-up infrastructure.
The build runs through GitHub Actions and everything passes without issues, where deployment.yml looks like this:
---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "c1eae56c-42cd-4953-9ab9-1a6facae87f8"
    # "api.priz.guru" should be configured to point at the IP address of the DO load-balancer
    service.beta.kubernetes.io/do-loadbalancer-hostname: "api.priz.guru"
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "2"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "8080"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/v1/ping"
spec:
  type: LoadBalancer
  selector:
    app: priz-api
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: priz-api
  labels:
    app: priz-api
spec:
  # modify replicas according to your case
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: priz-api
  template:
    metadata:
      labels:
        app: priz-api
    spec:
      containers:
        - name: priz-api
          image: <IMAGE>
          env:
            - name: PRIZ_DATABASE_URL
              value: "${PRIZ_DATABASE_URL_PROD}"
            - name: PRIZ_DATABASE_USER
              value: "${PRIZ_DATABASE_USER_PROD}"
            - name: PRIZ_DATABASE_PASSWORD
              value: "${PRIZ_DATABASE_PASSWORD_PROD}"
            - name: PRIZ_AUTH0_DOMAIN
              value: "${PRIZ_AUTH0_DOMAIN_PROD}"
            - name: PRIZ_AUTH0_API_DOMAIN
              value: "${PRIZ_AUTH0_API_DOMAIN_PROD}"
            - name: PRIZ_AUTH0_API_CLIENT_ID
              value: "${PRIZ_AUTH0_API_CLIENT_ID_PROD}"
            - name: PRIZ_AUTH0_API_CLIENT_SECRET
              value: "${PRIZ_AUTH0_API_CLIENT_SECRET_PROD}"
            - name: PRIZ_APP_BASE_URL
              value: "${PRIZ_APP_BASE_URL_PROD}"
            - name: PRIZ_STRIPE_API_KEY_SECRET
              value: "${PRIZ_STRIPE_API_KEY_SECRET_PROD}"
            - name: PRIZ_SEARCH_HOST
              value: "${PRIZ_SEARCH_HOST_PROD}"
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 500m
              memory: 500Mi
            limits:
              cpu: 2000m
              memory: 2000Mi
How do I even debug this issue? What is missing?
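One thing worth checking, since the error says port 8080 does not exist on the service: the do-loadbalancer-healthcheck-port annotation appears to need to reference a port that is actually exposed on the Service itself (here 80 or 443), not a container targetPort. A minimal sketch of that idea, assuming the /v1/ping endpoint is served on container port 8080, would be either to drop the annotation (so the first service port is used) or to expose 8080 as an additional service port:

  spec:
    type: LoadBalancer
    selector:
      app: priz-api
    ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 8080
      - name: https
        protocol: TCP
        port: 443
        targetPort: 8080
      # hypothetical extra service port so the healthcheck-port annotation can reference it
      - name: health
        protocol: TCP
        port: 8080
        targetPort: 8080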
Some references that we used:
https://docs.digitalocean.com/products/kubernetes/how-to/add-load-balancers/
https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/
https://github.com/digitalocean/digitalocean-cloud-controller-manager/tree/master/docs/controllers/services/examples

Kubernetes is always forwarding requests to the same pod

I have a Kubernetes cluster with 1 control plane and 1 worker; the worker runs 3 pods. The pods and a service of type NodePort are on the same node. I was expecting the service to load-balance requests between the pods, but it looks like all requests are always forwarded to only one pod.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30002
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web-app
          image: webimage
          ports:
            - containerPort: 80
          imagePullPolicy: Never
          resources:
            limits:
              cpu: "0.5"
            requests:
              cpu: "0.5"
This is expected behavior if your requests use a persistent TCP connection. Try adding a "Connection: close" header to your HTTP requests.
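A quick way to see whether requests now spread across pods (a sketch, assuming NODE_IP is the node's IP used for the NodePort and assuming the app returns something that identifies the serving pod):

  # each request uses a fresh connection, so kube-proxy can pick a different backend pod
  for i in $(seq 1 10); do
    curl -s -H "Connection: close" http://NODE_IP:30002/
  done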

Kubernetes: The service manifest doesn't provide an endpoint to access the application

This YAML tries to deploy a simple ArangoDB architecture in k8s. I know there are operators for ArangoDB, but this is a simple PoC to understand the k8s pieces and later integrate this DB with other apps.
The problem is that this YAML file applies correctly, but I don't get any IP:PORT to connect to; however, when I run that Docker image locally it works.
# create: kubectl apply -f ./arango.yaml
# delete: kubectl delete -f ./arango.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nms
  name: arangodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arangodb-pod
  template:
    metadata:
      labels:
        app: arangodb-pod
    spec:
      containers:
        - name: arangodb
          image: arangodb/arangodb:3.5.3
          env:
            - name: ARANGO_ROOT_PASSWORD
              value: "pass"
          ports:
            - name: http
              containerPort: 8529
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  namespace: nms
  name: arangodb-svc
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
    - targetPort: 8529
      protocol: TCP
      port: 8529
      targetPort: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nms
  name: arango-storage
  labels:
    app: arangodb-pod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
The service description seems clear:
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)    AGE
arangodb-svc   LoadBalancer   10.0.150.245   51.130.11.13   8529/TCP   14m
I am running kubectl apply -f arango.yaml against AKS, but I cannot access any IP:8529. Any recommendations?
I would like to simulate these commands:
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD=pass -d --name arangodb-instance arangodb/arangodb:3.5.3
docker start arangodb-instance
You must allow the NodePort 31098 at the NSG level in your VNet configuration and attach that NSG rule to the AKS cluster.
Also, please try to update the service manifest with the changes that came up in the comments:
- targetPort: 8529
  protocol: TCP
  port: 8529
  targetPort: http   # <-- this field is completely wrong; the manifest won't be parsed as intended
The above manifest is wrong. For NodePort (--service-node-port-range=30000-32767), the manifest should look something like this:
spec:
  type: NodePort
  selector:
    app: arangodb-pod
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - name: http
      port: 8529
      targetPort: 8529
      # Optional field
      nodePort: 31044
You can connect to <public-node-IP>:<nodePort> from outside AKS.
For a service of type LoadBalancer, your manifest should look like:
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
    - name: http
      protocol: TCP
      port: 8529
      targetPort: 8529
For LoadBalancer you can connect with <LoadBalancer-external-IP>:<external-port>.
However, in both of the above cases an NSG whitelist rule needs to be in place. You should whitelist your local machine's IP, or the IP of whichever machine you are accessing it from.
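Once the NSG rule is in place, a quick way to verify connectivity (a sketch, using the external IP shown above and the example node port 31044):

  # LoadBalancer case: ArangoDB answers over HTTP on 8529
  curl -v http://51.130.11.13:8529/

  # NodePort case: test the node port against a node's public IP
  nc -vz <node-public-ip> 31044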
You have to use an ingress controller, or you could also go with a LoadBalancer-type service assigning a static IP, whichever you prefer. Both will work.
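If you go with an ingress controller, a minimal sketch (assuming an ingress controller such as NGINX is already installed in the cluster, using the hypothetical hostname arango.example.com; the Ingress API version may differ on older clusters) could look like:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    namespace: nms
    name: arangodb-ingress
  spec:
    rules:
      - host: arango.example.com   # hypothetical hostname
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: arangodb-svc
                  port:
                    number: 8529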

Cannot access Kubernetes service outside cluster

I have created a Kubernetes service for my deployment, and via a load balancer an external IP has been assigned along with a node port, but I am unable to access the service from outside the cluster using the external IP and node port.
The service has been created properly and is up and running.
Below is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-portal
  template:
    metadata:
      labels:
        app: dev-portal
    spec:
      containers:
        - name: dev-portal
          image: bhavesh/ti-portal:develop
          imagePullPolicy: Always
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "1G"
              cpu: "1"
          ports:
            - containerPort: 9000
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  selector:
    app: dev-portal
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
      nodePort: 30429
  type: LoadBalancer
For some reason, I am unable to access my service from outside, and a 'Refused to connect' message is shown.
Update
The service is described with kubectl describe below:
Name:                     trakinvest-dev-portal
Namespace:                default
Labels:                   app=trakinvest-dev-portal
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"trakinvest-dev-portal"},"name":"trakinvest-dev-portal","...
Selector:                 app=trakinvest-dev-portal
Type:                     LoadBalancer
IP:                       10.245.185.62
LoadBalancer Ingress:     139.59.54.108
Port:                     <unset>  9000/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  30429/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
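Note that Endpoints is <none> in this output, which usually means the Service selector does not match the labels on any running pods (the Service here selects app=trakinvest-dev-portal, while the Deployment shown earlier labels its pods app=dev-portal). A minimal way to check this, assuming the names above:

  # an empty endpoints list means no pods matched the service selector
  kubectl get endpoints trakinvest-dev-portal

  # compare the selector on the service with the labels actually on the pods
  kubectl get svc trakinvest-dev-portal -o jsonpath='{.spec.selector}'
  kubectl get pods --show-labels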

Defining 2 ports in deployment.yaml in Kubernetes

I have a Docker image that I run with:
docker run --name test -h test -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install
I am trying to put it into a Kubernetes deployment file, and I have this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websphere
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: websphere
    spec:
      containers:
        - name: websphere
          image: ibmcom/websphere-traditional:install
          ports:
            - containerPort: 9443
          resources:
            requests:
              memory: 500Mi
              cpu: 0.5
            limits:
              memory: 500Mi
              cpu: 0.5
          imagePullPolicy: Always
My service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: websphere
  labels:
    app: websphere
spec:
  type: NodePort # Exposes the service on a node port
  ports:
    - port: 9443
      protocol: TCP
      targetPort: 9443
  selector:
    app: websphere
May I have guidance on how to map 2 ports in my deployment file?
You can add as many ports as you need.
Here is your deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websphere
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: websphere
    spec:
      containers:
        - name: websphere
          image: ibmcom/websphere-traditional:install
          ports:
            - containerPort: 9043
            - containerPort: 9443
          resources:
            requests:
              memory: 500Mi
              cpu: 0.5
            limits:
              memory: 500Mi
              cpu: 0.5
          imagePullPolicy: IfNotPresent
Here is your service.yml:
apiVersion: v1
kind: Service
metadata:
  name: websphere
  labels:
    app: websphere
spec:
  type: NodePort # Exposes the service on node ports
  ports:
    - port: 9043
      name: hello
      protocol: TCP
      targetPort: 9043
      nodePort: 30043
    - port: 9443
      name: privet
      protocol: TCP
      targetPort: 9443
      nodePort: 30443
  selector:
    app: websphere
Check in your Kubernetes API server configuration what the range for nodePorts is (usually 30000-32767, but it's configurable).
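As a quick sanity check after applying the service above (a sketch, using the names defined here):

  # show the ports and the nodePorts actually assigned to the service
  kubectl get svc websphere -o wide
  kubectl describe svc websphere

  # with the mapping above, container port 9043 should be reachable at <node-ip>:30043
  # and container port 9443 at <node-ip>:30443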
EDIT
If I remove the resources section from deployment.yml, it starts correctly (after about 5 minutes).
Here is a snippet of the logs:
[9/10/18 8:08:06:004 UTC] 00000051 webcontainer I com.ibm.ws.webcontainer.VirtualHostImpl addWebApplication SRVE0250I: Web Module Default Web Application has been bound to default_host[:9080,:80,:9443,:5060,:5061,:443].
Problems arise when connecting to it (I use an ingress with Traefik), probably because of certificates:
[9/10/18 10:15:08:413 UTC] 000000a4 SSLHandshakeE E SSLC0008E:
Unable to initialize SSL connection. Unauthorized access was denied
or security settings have expired. Exception is
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext
connection?
To solve that (I didn't go further) this may help: SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired
Trying to connect with port-forward:
and using the browser to connect, I land on this page:
Well, in Kubernetes you define your ports with the port field, which goes under the ports section of the manifest. You can simply define as many ports as you wish. The following example shows how to define two ports on a Service.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: https
      protocol: TCP
      port: 443
      targetPort: 9377