Does Cadence Service provide any health check API endpoint - cadence-workflow

Does the Cadence service provide any health check API endpoint to monitor and ensure its availability?

For a K8s deployment, I've seen this:
livenessProbe:
  initialDelaySeconds: 150
  tcpSocket:
    port: rpc
readinessProbe:
  initialDelaySeconds: 10
  tcpSocket:
    port: rpc
But it's not HTTP.
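If your Cadence version serves gRPC and implements the standard grpc.health.v1.Health service (worth verifying for your release; the default frontend gRPC port 7833 below is also an assumption), Kubernetes 1.24+ can probe it natively instead of only checking that the TCP port is open. A minimal sketch:

livenessProbe:
  grpc:
    # Assumed Cadence frontend gRPC port; adjust to your deployment
    port: 7833
  initialDelaySeconds: 150
readinessProbe:
  grpc:
    port: 7833
  initialDelaySeconds: 10

Unlike tcpSocket, a gRPC health check confirms the service can actually answer RPCs, not just that something is listening.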

Related

OpenShift readiness and liveness probes never failing even with incorrect HTTP URL

I am running a Spring Boot 2.0.0 application inside an OpenShift Pod. For the readiness and liveness probes, I am relying on Spring Boot Actuator health checks. My application properties file has the following properties:
server.address=127.0.0.1
management.server.address=0.0.0.0
server.port=8080
management.server.port=8081
management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true
management.endpoint.health.enabled=true
management.endpoints.web.exposure.include=health,info
management.endpoint.health.show-details=always
management.security.enabled=false
Following is the related configuration of the readiness and liveness probes.
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /mmcc
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 35
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /jjkjk
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 15
My expectation is that my readiness and liveness probes should fail with these random URLs, but they are succeeding.
Not sure what I am missing here. Kindly help.
The answer by Simon gave me a starting point, and I looked at the output of curl -vvv localhost:8081/jjkjk.
The URL was redirecting me to the login URL, so I figured this was because I have Spring Security on my classpath. (This also explains why the probes never failed: an httpGet probe counts any status in the 200-399 range as success, so the 302 redirect to the login page made even random paths pass.)
So I added to my properties file:
management.endpoints.web.exposure.include=health,info
and added:
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class ActuatorSecurity extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Permit unauthenticated access to all actuator endpoints
        http.requestMatcher(EndpointRequest.toAnyEndpoint()).authorizeRequests()
            .anyRequest().permitAll();
    }
}
and this resolved my problem by enabling access to the actuator URLs without credentials.
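For reference, here is what the probes could look like once they point at a real endpoint, a sketch assuming Spring Boot 2's default /actuator/health base path on the management port from the question:

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 35
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3

With the security configuration above in place, /actuator/health returns 200 without credentials and the probes behave as expected.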

How to put two ports in "livenessProbe"?

My legacy server listens on two TCP ports. I want to put a livenessProbe and readinessProbe on both ports. For a single port it looks like the following. How to do it for two ports?
livenessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5
readinessProbe:
  tcpSocket:
    port: 15772
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 5
The Kubernetes Pod object does not allow more than one liveness and one readiness probe per container. If the pod contains multiple containers, you can define a separate liveness/readiness probe for each container. For a single container, a common workaround is an exec probe that checks both ports, as sketched below.
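A minimal sketch of that exec workaround, assuming the image ships a shell and nc (netcat), and using 15773 as a hypothetical second port:

livenessProbe:
  exec:
    command:
    - sh
    - -c
    # 15773 is a hypothetical second port; nc must exist in the image
    - nc -z localhost 15772 && nc -z localhost 15773
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 5

The probe succeeds only when both ports accept connections; if either nc check fails, the command exits non-zero and the probe fails.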

Should I create an API for readinessProbe to work in Kubernetes

I am trying to create a RollingUpdate and am using the code below to see whether the pod came up or not. Should I create an explicit API path like /healthz in my application so that Kubernetes pings it and gets a 200 status back, or is it an internal URL handled by Kubernetes itself?
spec:
  containers:
  - name: liveness
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
As @Thomas answered for the HTTP probe: if the application does not provide an endpoint that returns a success response, you can use a TCP probe instead.
The kubelet tries to establish a TCP connection on the container's port. If it can establish a connection, the container is considered healthy; if it can't, it is considered unhealthy.
For example, in your case it would look like this:
ports:
- containerPort: 80
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
You can get further information in the Kubernetes documentation: configure-liveness-readiness-probes/
Kubernetes will make a request to the container on port 80 and path /healthz, and expects a status code in the 200-399 range for the check to be considered successful.
If your application does not provide a mapping for the path and returns a 404, Kubernetes assumes that the health check fails.
Depending on your framework, you may need to provide the endpoint manually if it is not already exposed. (You can check using curl or wget against the path from another pod and verify the result.)

OpenShift - liveness probe not working for HTTP

I have configured a liveness probe using httpGet, but it's failing with *error Client.Timeout exceeded while awaiting headers*.
But the same API works fine inside the container (using curl) and outside the container (Postman).
I have tried adding the host attribute in the liveness probe, but no luck.
Any idea what's going wrong?
Liveness Probe:
livenessProbe:
  initialDelaySeconds: 45
  periodSeconds: 10
  httpGet:
    path: /health
    port: xxxxx
  timeoutSeconds: 5
Version details:
OpenShift Master->v3.9.0+ba7faec-1
Kubernetes Master->v1.9.1+a0ce1bc657
OpenShift Web Console->v3.9.0+b600d46-dirty
Try increasing initialDelaySeconds and timeoutSeconds, and check for anything external (such as a PVC mount) causing slow startup:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 200
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 5
PS: For a probe to be considered successful, your HTTP return status must be greater than or equal to 200 and less than 400.
Hope this helps
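As a side note beyond this answer: on newer Kubernetes versions (1.18+, where startupProbe is enabled by default, so not on the OpenShift 3.9 / Kubernetes 1.9 cluster in the question), a startup probe is the idiomatic way to cover slow starts instead of a large initialDelaySeconds. A sketch assuming the same /health endpoint:

startupProbe:
  httpGet:
    path: /health
    port: 8080
  # Allow up to 30 * 10 = 300 seconds for startup
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 10

The liveness probe only begins once the startup probe has succeeded, so a slow boot no longer forces you to inflate initialDelaySeconds.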

Accessing service health checks ports after configuring istio

So we're deploying Istio 1.0.2 with global mTLS, and so far it's gone well.
For health checks we've added separate ports to the services and configured them as per the docs:
https://istio.io/docs/tasks/traffic-management/app-health-check/#mutual-tls-is-enabled
Our application ports are now on 8080 and the health check ports are on 8081.
After doing this Kubernetes is able to do health checks and the services appear to be running normally.
However our monitoring solution cannot hit the health check port.
The monitoring application also sits in kubernetes and is currently outside the mesh. The above doc says the following:
Because the Istio proxy only intercepts ports that are explicitly declared in the containerPort field, traffic to the 8002 port bypasses the Istio proxy regardless of whether Istio mutual TLS is enabled.
This is how we have it configured. So in our case 8081 should be outside the mesh:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 180
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: <our-service>
ports:
- containerPort: 8080
  name: http
  protocol: TCP
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
However we can't access 8081 from another pod which is outside the mesh.
For example:
curl http://<our-service>:8081/manage/health
curl: (7) Failed connect to <our-service>:8081; Connection timed out
If we try from another pod inside the mesh, Istio throws back a 404, which is perhaps expected.
I tried to play around with destination rules like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <our-service>-health
spec:
  host: <our-service>.namespace.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8081
      tls:
        mode: DISABLE
But that just kills all connectivity to the service, both internally and through the ingress gateway.
According to the official Istio documentation, port 8081 will not pass through the Istio Envoy proxy, and hence won't be accessible to other Pods outside your service mesh, because the Istio proxy only considers ports declared in the containerPort field when handling traffic for the Pod's service.
If you build an Istio service mesh without TLS authentication between Pods, there is an option to use the same port for regular traffic to the Pod's service and for the readiness/liveness probes.
However, if you use port 8001 for both regular traffic and liveness probes, the health check will fail when mutual TLS is enabled, because the HTTP request is sent from the kubelet, which does not send a client certificate to the liveness-http service.
Since Istio Mixer provides three Prometheus endpoints, you can consider using Prometheus as the main monitoring tool to collect and analyze the mesh metrics.
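One more thing worth checking, as an assumption since the Service manifest isn't shown in the question: for curl http://<our-service>:8081/manage/health from another pod to work at all, the Service itself must expose 8081. Kubelet probes go straight to the pod IP, but pod-to-pod traffic through the Service name only reaches ports declared in the Service. A sketch:

apiVersion: v1
kind: Service
metadata:
  name: <our-service>
spec:
  selector:
    app: <our-service>  # hypothetical selector label
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  # Deliberately not "http-" prefixed, given Istio 1.0's port-naming conventions
  - name: monitoring
    port: 8081
    targetPort: 8081

Since 8081 is not listed in containerPort, traffic to it bypasses the sidecar per the doc quoted above, so a monitoring pod outside the mesh can reach it without mTLS.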