OpenShift Service Proxy timeout - REST

I have an application deployed on OpenShift Container Platform v3.6. It consists of multiple services interconnected to each other.
The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs on nginx, which I've already configured with long proxy send/read timeouts, so the 504 doesn't come from it. I think it comes from the Service Proxy component of the OpenShift platform, but I can't find out where and how to configure a kind of service proxy timeout. I know about the HAProxy timeout for external routes, but my services live in the same cluster and communicate with each other via OpenShift Container Platform DNS.
Could this be a Service Proxy timeout issue? How can it be configured?
Thanks!

Your route timeout is the culprit. The haproxy ingress router is terminating the request. You can configure the timeout by following the docs below:
https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html
For example:
# Set the timeout on 'longrunningroute' to five minutes.
oc annotate route longrunningroute --overwrite haproxy.router.openshift.io/timeout=5m
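If you prefer to keep the timeout in a manifest rather than annotating imperatively, the same annotation can be set on the Route object itself. A minimal sketch (the backend service name and port are placeholders, not taken from the question):
apiVersion: v1                                   # route.openshift.io/v1 on newer clusters
kind: Route
metadata:
  name: longrunningroute
  annotations:
    haproxy.router.openshift.io/timeout: 5m      # same effect as the oc annotate command above
spec:
  to:
    kind: Service
    name: backend                                # placeholder service name
  port:
    targetPort: 8080                             # placeholder target port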

In my case I didn't annotate the route myself but added the annotation to the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: my-namespace
  annotations:
    haproxy.router.openshift.io/timeout: 600s
spec:
  tls:
  - hosts:
    - example.com
    secretName: https-tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
The routes are managed by the ingress and therefore inherit the annotations from it.
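To confirm the annotation actually made it onto the generated route, you can inspect the routes in the namespace (namespace taken from the example above; the route name is generated from the Ingress name, so substitute the real one):
oc get routes -n my-namespace
oc get route <generated-route-name> -n my-namespace -o yaml | grep timeout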

Related

Kubernetes expose a service on a port over tls

I have my application https://myapp.com deployed on K8s, with an nginx ingress controller. HTTPS is terminated at nginx.
Now there is a need to expose one service on a specific port, for example https://myapp.com:8888. The idea is to keep https://myapp.com secured inside the private network and expose only port 8888 to the internet for integration.
Is there a way all traffic can be handled by the ingress controller, including TLS termination, while also exposing port 8888 and mapping it to a service?
Or
Do I need another nginx terminating TLS and exposed on a NodePort? I am not sure if I can access services like https://myapp.com:<node_port> over HTTPS.
Is using multiple ingress controllers an option?
What is the best practice to do this in Kubernetes?
Use the sidecar proxy pattern to add HTTPS support to the application running inside the pod.
Run nginx as a sidecar proxy container fronting the application container inside the same pod. Access the application through port 8888 on the nginx proxy; nginx then routes the traffic to the application.
The post below shows how this can be implemented (it includes a reference diagram):
https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs
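A minimal sketch of such a pod, just to illustrate the shape (container names, images, ports, ConfigMap and Secret names are assumptions, not taken from the linked post): nginx listens on 8888, terminates TLS and proxies to the application container over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-tls-sidecar        # hypothetical name
spec:
  containers:
  - name: myapp                       # application container listening on localhost:8080
    image: myapp:latest               # hypothetical image
    ports:
    - containerPort: 8080
  - name: nginx-proxy                 # sidecar terminating TLS on 8888, proxying to 127.0.0.1:8080
    image: nginx:1.25
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: nginx-conf                # nginx config with an ssl server block proxying to 127.0.0.1:8080
      mountPath: /etc/nginx/conf.d
    - name: tls-cert                  # certificate and key used by the ssl server block
      mountPath: /etc/nginx/tls
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-proxy-conf          # hypothetical ConfigMap holding the nginx config
  - name: tls-cert
    secret:
      secretName: myapp-tls           # hypothetical Secret with tls.crt / tls.key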
It is not a best practice to expose a custom port over the internet.
Instead, create a sub-domain (e.g. https://custom.myapp.com) which points to the internal service on port 8888.
Then create a separate nginx ingress (not an ingress controller) for that "custom.myapp.com" sub-domain.
An example manifest file follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myapp-service
  namespace: abc
spec:
  rules:
  - host: custom.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8888
Hope this helps.
So you have service foo on some port, which you want to keep available only on your internal network, and service bar, which runs on port 8888 in that same pod.
It's as simple as setting up two Services pointing at that pod, with different spec.ports[].targetPort values. My example assumes a svc foo pointing at port 80 and a svc bar pointing at port 8888 on the pod, as sketched below.
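A minimal sketch of the two Services (the pod label is an assumption): foo keeps exposing the internal-only port, while bar maps its own port 80 to the pod's 8888, which is what the Ingress below refers to.
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: myapp            # assumed pod label
  ports:
  - port: 80
    targetPort: 80        # pod port serving the internal-only application
---
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  selector:
    app: myapp            # same pod, different target port
  ports:
  - port: 80
    targetPort: 8888      # pod port that should be reachable from outside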
Take care that, generally, the ingress controller only serves HTTP and HTTPS connections on ports 80 and 443. That is a network setting generally defined for the nodes that are running the ingress controller; arbitrary TCP/UDP ports are not served out of the box by the ingress controller.
My advice is to use something like this, and use path to expose the required service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "myapp.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: foo
            port:
              number: 80
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: bar
            port:
              number: 80
If you want to further secure your network, you should probably take a look at NetworkPolicies. They allow granular control of access to pods and services; you can, for example, only allow external ingress traffic to that pod on port 8888, as in the sketch below.
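A minimal sketch of such a policy (the policy name and pod label are assumptions, and it only takes effect if your CNI plugin enforces NetworkPolicies): only ingress traffic to port 8888 of the selected pods is allowed, everything else directed at them is denied once the policy is in place.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-allow-only-8888   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp                # assumed pod label
  policyTypes:
  - Ingress
  ingress:
  - ports:                      # no 'from' clause: any source, but only on this port
    - protocol: TCP
      port: 8888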

Traefik behind ssl terminating load balancer return 404

I have a K8s setup with traefik being exposed like this
kubernetes:
  ingressClass: traefik
service:
  nodePorts:
    http: 32080
serviceType: NodePort
Behind it, I forward some requests to different services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000
When DNS is set to resolve directly to my IP, the application works fine over HTTP:
http://my-host.com:32080/my-first-path
But when someone adds SSL through AWS ALB / API Gateway, the application fails to be reached, with a 404 Not Found error.
The route is like this
https://my-host.com/my-first-path
On the AWS side, they configured something like this:
https://my-host.com => SSL Termination and => Forward all to 43.43.43.43:32080
I think this fails because Traefik is expecting http://my-host.com but not https://my-host.com, which leads to its failure to find the matching route? Or maybe at SSL termination time the hostname is lost, so that Traefik cannot find a route?
What should I do in this situation?
I am not very familiar with ALB, but what is probably happening is that the requests received by the load balancer contain the header Host: my-host.com, and when the request gets forwarded to your ingress controller, the header is replaced by Host: 43.43.43.43. If this is the case, I see 3 solutions:
1. ALB might be able to pass the original Host header to the target. (You will have to check in the docs whether that's possible.)
2. If the application behind your ingress doesn't check the Host header, you can write an ingress that doesn't match on a specific host, i.e. one where the host field is not specified (see the sketch below).
3. If name resolution works internally, you can define a name for your target and use this name both in your ALB and in your ingress.
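A minimal sketch of option 2, reusing the service name and port from the question's manifest (the ingress name is a placeholder): because no host is specified, the rule matches requests no matter what the Host header was rewritten to.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name-any-host           # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:                          # no 'host' field: matches any Host header
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000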

Expose pod's tomcat port

I have a bare-metal Kubernetes pod running a Tomcat application on port 8085. If it were a common server, the app would be accessible via http://<server-ip>:8085/app. My goal is to expose Tomcat on the Kubernetes node's address and the same port as used by Tomcat.
I am able to expose and access the app using a NodePort service, but it is inconvenient that the port is always different.
I tried to setup traefik ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-tag2
spec:
  rules:
  - host: kubernetes.example.com # in my conf I use the node's domain name
    http:
      paths:
      - path: /test
        backend:
          serviceName: test-tag2
          servicePort: 8085
And I can see the result in Traefik's dashboard, but if I navigate to http://kubernetes.example.com/test/app I still get nothing.
I've tried a bunch of ways to configure that and still no luck.
Is it actually possible to expose my pod in this way?
Did you try specifying a nodePort value in the service YAML? If specified, Kubernetes will create the service on the specified NodePort. If that nodePort is not available, Kubernetes doesn't create the service.
Refer to this answer for more details:
https://stackoverflow.com/a/43944385/1237402
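For example, a minimal sketch (the service name and pod label are assumptions based on the question's ingress; note that by default nodePort values must fall in the 30000-32767 range, controlled by the API server's --service-node-port-range flag, so exposing exactly 8085 would require widening that range):
apiVersion: v1
kind: Service
metadata:
  name: test-tag2
spec:
  type: NodePort
  selector:
    app: test-tag2          # assumed pod label
  ports:
  - port: 8085
    targetPort: 8085        # the Tomcat container port
    nodePort: 30085         # fixed node port; must be within the cluster's allowed range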

How do I make my admin ui of cockroachdb publicly available via traefik ingress controller on kubernetes?

Kubernetes dedicated cockroachdb node - accessing admin ui via traefik ingress controller fails - page isn't redirecting properly
I have a dedicated Kubernetes node running CockroachDB. The pods get scheduled and everything is set up. I want to access the admin UI from a subdomain like so: cockroachdb.hostname.com. I have done this with the Traefik dashboard and the Ceph dashboard, so I know my ingress setup is working. I even have cert-manager running to have HTTPS enabled. I get the error from the browser that the page is not redirecting properly.
Do I have to specify the host name somewhere special?
I have tried adding this with no success: --http-host cockroachdb.hostname.com
This dedicated node has its own public ip which is not mapped to hostname.com. I think I need to change a setting in cockroachdb, but I don't know which because I am new to it.
Does anyone know how to publish admin UI via an ingress?
EDIT01: Added ingress and service config files
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "true"
    ingress.kubernetes.io/ssl-host: "cockroachdb.hostname.com"
    traefik.frontend.rule: "Host:cockroachdb.hostname.com,www.cockroachdb.hostname.com"
    traefik.frontend.redirect.regex: "^https://www.cockroachdb.hostname.com(.*)"
    traefik.frontend.redirect.replacement: "https://cockroachdb.hostname.com/$1"
spec:
  rules:
  - host: cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  - host: www.cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  tls:
  - hosts:
    - cockroachdb.hostname.com
    - www.cockroachdb.hostname.com
    secretName: cockroachdb-secret
Service:
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: grpc
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
EDIT02:
I can access the Admin UI page now, but only by going over the external IP address of the server with port 8080. I think I need to tell my server that its IP address is mapped to the correct sub-domain?
EDIT03:
On both scheduled traefik-ingress pods the following logs are created:
time="2019-04-29T04:31:42Z" level=error msg="Service not found for default/cockroachdb-public"
Your references look good on the ingress side. You are using quite a few redirects; unless you really know what each one is accomplishing, don't use them, as you might end up in an infinite loop of redirects.
You can take a look at the following logs and methods to debug:
Run kubectl logs <traefik pod> and see the last batch of logs.
Run kubectl get service; judging from the error you posted, this is likely your main issue. Make sure your service exists in the default namespace.
Run kubectl port-forward svc/cockroachdb-public 8080:8080 and try connecting to it through localhost:8080 and see terminal for potential error messages.
Run kubectl describe ingress cockroachdb-public and look at the events, this should give you something to work with.
Try accessing the service from another pod you have running: ping cockroachdb-public.default.svc.cluster.local and see if it resolves to an IP address.
Take a look at your clusterrolebindings and serviceaccount, it might be limited and not have permission to list services in the default namespace: kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default

Kubernetes reverse proxy pod for node-specific services?

In a Kubernetes cluster I have a per-node HTTP service deployed (using a DaemonSet). This service returns node-specific data (which is otherwise not available via the cluster/remote API). I cannot make use of a Kubernetes Service, as this would result in a kind of service roulette: the client cannot control exactly which node the HTTP request gets forwarded to. As the service needs to return node-specific data, this would cause data to be returned for a random node, but not for the node the client wants.
My suspicion is that I need a reverse proxy that uses part of its own URL path to relay an incoming HTTP request deterministically to exactly the node the client indicates. This proxy could in turn be accessed by clients via the cluster/remote API service proxy functionality.
http://myservice/node1/... --> http://node1:myservice/...
http://myservice/node2/... --> http://node2:myservice/...
...
Is there a ready-made pod (or Helm chart) available that maps a service running on all cluster nodes to a single proxy URL, with some path component specifying the node whose service instance to relay to? Is there some way to restrict the reverse proxy to relaying only to those nodes specified in the DaemonSet of the pod spec defining my per-node service?
Additionally, is there some ready-made "hub page" available for the reverse proxy, listing/linking to only those nodes where my service is currently running?
Or is this something where I need to create my own reverse proxy setup specifically? Is there some integration between, e.g., nginx and Kubernetes?
It is almost impossible if you use a DaemonSet, because you can't add a unique label to each pod in a DaemonSet. If you need to distribute one pod per node, you can use pod anti-affinity with a StatefulSet or Deployments, as sketched below.
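A minimal sketch of that idea, with one Deployment per node (the Deployment name, labels and image are assumptions): each Deployment carries a unique nodename label that its per-node Service selects on, plus a shared label used by the anti-affinity rule so that no two of these pods land on the same node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-svc-1                              # hypothetical: one Deployment per node
spec:
  replicas: 1
  selector:
    matchLabels:
      nodename: unique-label-for-pod-on-node
  template:
    metadata:
      labels:
        app: node-svc                           # shared label used by the anti-affinity rule
        nodename: unique-label-for-pod-on-node  # unique label selected by svc-for-node1 below
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: node-svc                   # never co-locate two of these pods
            topologyKey: kubernetes.io/hostname
      containers:
      - name: node-svc
        image: my-node-service:latest           # hypothetical image
        ports:
        - containerPort: 9376                   # matches the targetPort in the Service below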
Then create a service for each node:
kind: Service
apiVersion: v1
metadata:
  name: svc-for-node1
spec:
  selector:
    nodename: unique-label-for-pod-on-node
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
And finally set up the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /svc-for-node1
        backend:
          serviceName: svc-for-node1
          servicePort: 80
      - path: /svc-for-node2
        backend:
          serviceName: svc-for-node2
          servicePort: 80
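With this in place (and assuming an ingress-nginx version where rewrite-target: / simply strips the matched path prefix before forwarding), clients can pick the node deterministically by path, for example:
# Hostnames and paths taken from the manifests above
curl http://foo.bar.com/svc-for-node1/   # routed to the pod carrying node 1's unique label
curl http://foo.bar.com/svc-for-node2/   # routed to the pod carrying node 2's unique label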