Using istio 1.0.2 and kubernetes 1.12 on GKE.
When deploying a web application, the pod never reaches the healthy status.
My main pod spits out healthy logs.
However, my sidecar (the istio-proxy container) logs:
* failed checking application ports. listeners="0.0.0.0:15090","10.8.48.10:53","10.8.63.194:15443","10.8.63.194:443","10.8.58.47:15011","10.8.54.249:42422","10.8.48.44:443","10.8.58.10:44134","10.8.54.34:443","10.8.63.194:15020","10.8.49.250:8080","10.8.63.194:31400","10.8.63.194:15029","10.8.63.194:15030","10.8.60.185:11211","10.8.49.0:53","10.8.61.194:443","10.8.48.1:443","10.8.48.180:80","10.8.51.133:443","10.8.63.194:15031","10.8.63.194:15032","0.0.0.0:9901","0.0.0.0:9090","0.0.0.0:80","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:15010","0.0.0.0:8080","0.0.0.0:20001","0.0.0.0:7979","0.0.0.0:9091","0.0.0.0:9411","0.0.0.0:15004","0.0.0.0:15014","0.0.0.0:3030","10.8.33.8:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 5000
5000 is indeed the port my web app is listening on.
Any suggestions?
If there is a mismatch between the deployment port and the service port, this can cause issues in combination with the readiness check of the sidecar.
Add the annotation readiness.status.sidecar.istio.io/applicationPorts in your deployment like this:
annotations:
  readiness.status.sidecar.istio.io/applicationPorts: "5000"
You can add multiple ports by using comma separation.
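For reference, a minimal sketch of where the annotation goes, assuming a plain Deployment (the name, labels, and image below are placeholders); it belongs on the pod template metadata so the injected sidecar picks it up:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app                 # placeholder name
spec:
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
      annotations:
        readiness.status.sidecar.istio.io/applicationPorts: "5000"
    spec:
      containers:
      - name: web
        image: my-web-app:latest   # placeholder image
        ports:
        - containerPort: 5000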
@mkrobi I got this working, as suggested in this post, by adding the following:
readinessProbe:
  httpGet:
    path: /
    port: 8080
    scheme: HTTP
to the containers in my deployment. Make sure to change port 8080 to 5000.
I'm using janusgraph docker image - https://hub.docker.com/r/janusgraph/janusgraph
In my Kubernetes deployment I initialise the remote graph using a Groovy script mounted into docker-entrypoint-initdb.d.
This works as expected, but if the remote host is not ready the JanusGraph container throws an exception and stays in the Running state.
Because of this, Kubernetes will not attempt to restart the container. Is there any way to configure the JanusGraph container to terminate in case of an exception?
As @Gavin has mentioned, you can use probes to check whether containers are working. A liveness probe is used to know when a container has failed: if the container is unresponsive, Kubernetes can restart it.
Readiness probes inform Kubernetes when a container is available to accept traffic. The readiness probe is used to control which pods are used as backends for a service; a pod is considered ready when all of its containers are ready, and if a pod is not ready it is removed from the service Endpoints.
Kubernetes supports three mechanisms for implementing liveness and readiness probes:
1) making an HTTP request against a container
These probes have additional fields that can be set on httpGet:
host: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
path: Path to access on the HTTP server. Defaults to /.
httpHeaders: Custom headers to set in the request. HTTP allows repeated headers.
port: Name or number of the port to access on the container. Number must be in the range 1 to 65535.
Read more: http-probes.
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
2) opening a TCP socket against a container
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
3) running a command inside a container
livenessProbe:
  exec:
    command:
    - sh
    - /tmp/status_check.sh
  initialDelaySeconds: 10
If the command exits with a status code other than 0, the probe is considered failed.
You can also add further parameters to the probes, such as initialDelaySeconds, which sets the number of seconds after the container has started before liveness or readiness probes are initiated. See: configuring-probes.
In every case, also add restartPolicy: Never to your pod definition (the default is Always).
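A minimal sketch of where restartPolicy sits, at the pod spec level alongside containers (the container name is a placeholder):
spec:
  containers:
  - name: janusgraph               # placeholder container name
    image: janusgraph/janusgraph:latest
  restartPolicy: Never             # the default is Always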
A readinessProbe could be employed here with a command like janusgraph show-config, or something similar that will exit with a non-zero code when the remote host is not ready:
spec:
  containers:
  - name: liveness
    image: janusgraph/janusgraph:latest
    readinessProbe:
      exec:
        command:
        - janusgraph
        - show-config
Kubernetes will stop sending traffic to the pod if the readinessProbe fails. A livenessProbe could also be used here, in case the container needs to be restarted if the remote host ever becomes unavailable.
Consider enabling JanusGraph server metrics, which could then be used with Prometheus for additional monitoring or even with the livenessProbe itself.
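If the metrics end up exposed over HTTP, a livenessProbe could point at that endpoint. This is only a sketch: the path and port below are assumptions and depend entirely on how you configure the metrics reporting:
livenessProbe:
  httpGet:
    path: /metrics          # hypothetical path; depends on how you expose metrics
    port: 8184              # hypothetical port; depends on your metrics configuration
  initialDelaySeconds: 60
  periodSeconds: 20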
I am exploring the istio service mesh on my k8s cluster hosted on EKS(Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookapp demonstration, and I understand most of the use cases properly.
Then I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster, which runs hundreds of microservices, and what I noticed is that my services can call http endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting:
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
Though I am able to call external https endpoints from my testing cluster.
To verify, I checked the egress policy and it is mode: ALLOW_ANY in both clusters.
Then I removed Istio completely from my dev cluster and installed the demo.yml to test, but now this is also not working.
I tried to relate my issue to this thread, but without success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
  - name: http-ssl # <<<< THIS NAME MATTERS
    port: 443
    targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens to port 443, and make sure the port name doesn't start with http-....
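For comparison, a sketch of the same Service with only the port name changed, which in my experiments was enough to stop breaking outbound HTTPS:
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
  - name: tcp-https   # renamed; https also worked in my tests
    port: 443
    targetPort: http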
So we're deploying istio 1.0.2 with global mtls and so far it's gone well.
For health checks we've added separate ports to the services and configured them as per the docs:
https://istio.io/docs/tasks/traffic-management/app-health-check/#mutual-tls-is-enabled
Our application ports are now on 8080 and the health check ports are on 8081.
After doing this Kubernetes is able to do health checks and the services appear to be running normally.
However our monitoring solution cannot hit the health check port.
The monitoring application also sits in kubernetes and is currently outside the mesh. The above doc says the following:
Because the Istio proxy only intercepts ports that are explicitly declared in the containerPort field, traffic to the 8002 port bypasses the Istio proxy regardless of whether Istio mutual TLS is enabled.
This is how we have it configured. So in our case 8081 should be outside the mesh:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 180
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: <our-service>
ports:
- containerPort: 8080
  name: http
  protocol: TCP
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /manage/health
    port: 8081
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
However we can't access 8081 from another pod which is outside the mesh.
For example:
curl http://<our-service>:8081/manage/health
curl: (7) Failed connect to <our-service>:8081; Connection timed out
If we try from another pod inside the mesh istio throws back a 404, which is perhaps expected.
I tried to play around with destination rules like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: <our-service>-health
spec:
  host: <our-service>.namespace.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8081
      tls:
        mode: DISABLE
But that just kills all connectivity to the service, both internally and through the ingress gateway.
According to the official Istio documentation, port 8081 will not go through Istio's Envoy proxy, and hence won't be accessible to other pods outside your service mesh, because the Istio proxy only considers the ports that are explicitly declared in the containerPort field of the pod's containers.
If you build an Istio service mesh without mutual TLS authentication between pods, there is an option to use the same port for both the regular traffic to the pod's service and the readiness/liveness probes.
However, if you use port 8001 for both regular traffic and liveness
probes, health check will fail when mutual TLS is enabled because the
HTTP request is sent from Kubelet, which does not send client
certificate to the liveness-http service.
Given that Istio Mixer provides three Prometheus endpoints, you can consider using Prometheus as the main monitoring tool to collect and analyze the mesh metrics.
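If you go the Prometheus route, a minimal scrape-config sketch could look like the following; the target assumes a default Istio 1.0 install, where the istio-telemetry service exposes Mixer's Prometheus endpoint on port 42422:
scrape_configs:
- job_name: istio-mesh                           # illustrative job name
  metrics_path: /metrics
  static_configs:
  - targets:
    - istio-telemetry.istio-system:42422         # Mixer's telemetry endpoint; assumes a default Istio 1.0 install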
I have a CockroachDB instance running in a Kubernetes cluster on Google Kubernetes Engine. I am trying to expose port 26257 so I can connect to it from my local machine.
As stated in this answer, port forwarding to the pod will not work.
I have an nginx-ingress controller which is used to map from my domain name paths to services, so I tried to use that:
I changed my db-cockroachdb-public service from ClusterIP to NodePort:
type: NodePort
I added these lines to my nginx-controller YAML:
- name: postgresql
  nodePort: 30472
  port: 26257
  protocol: TCP
  targetPort: 26257
and these lines to my ingress YAML:
- host: db.mydomain.com
  http:
    paths:
    - path: /
      backend:
        serviceName: db-cockroachdb-public
        servicePort: 26257
However, I'm unable to connect to the database - connection gets refused. I also tried to disable SSL redirects in the nginx controller, but it still doesn't work.
I also tried a ConfigMap but it didn't do anything:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md
There are a few ways to fix this. Most are related to changing your ingress configuration or how you're connecting to the service, which I'm not going to go into. Another option is to make port forwarding work to eliminate the need for the ingress machinery.
You can make port forwarding work by modifying the CockroachDB config file slightly. Change the name of the --host flag in the invocation of the Cockroach binary to be --advertise-host instead. That way, the process will listen on localhost in addition to on its hostname, which will make port forwarding work.
edit: To follow up on this, I've switched the default configuration in the CockroachDB repo to use --advertise-host instead of --host, so port forwarding works by default now.
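For illustration, a sketch of what that change looks like in a StatefulSet's container command; everything other than the --advertise-host flag (image tag, join list, other flags) is a placeholder and should match your existing configuration:
containers:
- name: cockroachdb
  image: cockroachdb/cockroach            # pin whatever version you already run
  command:
  - /bin/bash
  - -ecx
  # --advertise-host (instead of --host) keeps the process listening on
  # localhost as well, which is what makes kubectl port-forward work
  - exec /cockroach/cockroach start --insecure --advertise-host $(hostname -f) --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb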
I don't know whether it should technically work to proxy CockroachDB through an nginx instance, but your setup fails for another reason. When specifying a servicePort in the rules section, you tell k8s which port of the service is exposed. The external mapping itself happens by default to port 80/443, not to your desired port, so you should try simply asking for port 80 in your case.
I have deployed a gRPC service running on OpenShift Origin, backed by an OpenShift service and exposed with an OpenShift route. I am trying to make this pod available via a service and route that maps the container port (50051) to port 8080 for the outside world.
The image that the service is trying to expose has, in its Dockerfile:
EXPOSE 50051
The route has the following:
Service Port: 8080/TCP
Target Port: 50051
In the DeploymentConfig I specify the port with:
ports:
- containerPort: 50051
  protocol: TCP
However, when I try to access the application via the route and port, I get (from Java)
java.net.NoRouteToHostException: No route to host
And when I try to telnet the service IP:
telnet 172.30.197.247 8080
I am able to connect.
However, when I try to connect via the route it doesn't work:
telnet my.route.com 8080
Trying ...
telnet: connect to address : Connection refused
When I use:
curl -kv my-svc.myproject.svc.cluster.local:8080
I can connect.
So it seems the service is working but the route is not.
I have been going through the troubleshooting guide on https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router
The router setup in OpenShift focuses on HTTP/HTTPS(SNI)/TLS(SNI). However, it appears that you can use an externalIP to expose non-web application ports from the cluster. Because gRPC is an over-the-wire protocol, you might need to go down this path.
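A rough sketch of the externalIP approach (the Service name, selector, and IP below are placeholders; the IP must be one that actually routes to a cluster node):
apiVersion: v1
kind: Service
metadata:
  name: grpc-external         # placeholder name
spec:
  selector:
    app: my-grpc-app          # placeholder selector
  ports:
  - port: 8080
    targetPort: 50051
    protocol: TCP
  externalIPs:
  - 192.0.2.10                # placeholder; must be an IP that routes to a cluster node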
There are multiple things to check:
Does your route point to your service? Here is an example:
apiVersion: v1
kind: Route
spec:
  host: my.route.com
  to:
    kind: Service
    name: yourservice
    weight: 100
If that's not the case, the route and the service are not connected.
You can also check the router configuration. Connect to your router with oc rsh and check whether your route name appears in /var/lib/haproxy/conf/haproxy.config (the backend name format should be backend be_http_NAMESPACE_ROUTENAME). The server lines below the backend section should contain the IP of your pod (you can obtain your pod IP with the oc get pods -o wide command).
If it does not appear, the route is not registered in the router config. You can try restarting the router and rechecking the haproxy.config file.
Can you connect to the pod IP from the router container?