One Deployment is reported with unhealthy upstream - kubernetes

I'm using Istio with Kubernetes and facing an issue with a simple RouteRule splitting traffic between two Deployments. Only one of the Deployments (howdy) returns results correctly; the other (hello) reports "no healthy upstream".
Here is the k8s manifest:
apiVersion: v1
kind: Service
metadata:
  name: greeting
  labels:
    name: greeting
spec:
  selector:
    app: greeting
  ports:
  - name: http
    protocol: TCP
    port: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeting-hello
  labels:
    name: greeting-hello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeting
        greeting: hello
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting:hello
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /resources/greeting
            port: 8081
          initialDelaySeconds: 50
          periodSeconds: 5
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeting-howdy
  labels:
    name: greeting-howdy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeting
        greeting: howdy
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting:howdy
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /resources/greeting
            port: 8081
          initialDelaySeconds: 50
          periodSeconds: 5
And the route is:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: greeting-50-50
spec:
  destination:
    name: greeting
  route:
  - labels:
      app: greeting
      greeting: hello
    weight: 100
  - labels:
      app: greeting
      greeting: howdy
    weight: 0
Any idea?
This is also tracked at https://github.com/aws-samples/aws-microservices-deploy-options/issues/239

I've reproduced your deployment issue and found the following errors during the process:
Warning Unhealthy 48m (x7 over 49m) kubelet, gke-cluster-1-default-pool-e44042f6-kndh Readiness probe failed: Get http://10.0.2.30:8081/resources/greeting: dial tcp 10.0.2.30:8081: getsockopt: connection refused
I would recommend checking whether the Docker containers in this particular configuration actually expose an endpoint on the specified port (8081). I also found that the pods are not registered in any of the Service's endpoints:
$ kubectl describe service
[...]
IP: 10.3.247.97
Port: http 8081/TCP
TargetPort: 8081/TCP
Endpoints:
Session Affinity: None
Events: <none>
The problem is that the application does not bind to the port that was set as the container port, so it never receives the connection; the readiness probe therefore fails and the pod is never added to the Service's endpoints.
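If it helps, a rough way to verify both points (assuming the Service is named greeting in the default namespace; substitute your actual pod name, and note the image needs a shell with netstat for the exec check):
# are any pods registered behind the Service?
kubectl get endpoints greeting
# what is the application actually listening on inside the pod?
kubectl exec -it <greeting-hello-pod> -- netstat -tlnp
# check the application logs for the port it binds to
kubectl logs <greeting-hello-pod>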

Access pod from another pod with kubernetes url

I have two pods created with a deployment and a service. My problem is as follows: the pod "my-gateway" accesses the URL "adm-contact" at "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built with the Dockerfile declare EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s cluster environment are you running this in? I ask because a Service of type LoadBalancer is a special kind: at pod initialisation your cloud provider's admission controller will spot this and add in a load-balancer config. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
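A quick way to see whether that's the case (a rough check, assuming kubectl is pointed at this cluster):
kubectl get svc my-gateway my-adm-contact
# on a cluster without a cloud load-balancer controller, the EXTERNAL-IP
# column for these LoadBalancer services stays <pending> indefinitely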
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it, e.g. curl http://nginx-service.default.svc. In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached on its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster e.g. one created by kubectl run bb --image=busybox -it -- sh would be able to resolve the command ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to make a connection to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try a kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000 and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
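For instance, to keep all three forwards alive from a single shell (just a sketch; bash job control shown), you could background them:
kubectl port-forward svc/gateway-service 3000:3000 &
kubectl port-forward svc/my-adm-contact-service 8879:8879 &
kubectl port-forward svc/my-adm-contact-service 3001:3000 &
# later, stop them all:
kill %1 %2 %3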
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP
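For example, to confirm the wiring from inside the cluster after applying the above (a rough check; the test pod name is arbitrary and the URL paths are placeholders for whatever your apps actually serve):
kubectl run bb --image=busybox -it --restart=Never -- sh
# then, inside the test pod:
wget -qO- http://gateway-service:3000/
wget -qO- http://my-adm-contact-service:8879/adm-contact
# clean up afterwards:
exit
kubectl delete pod bb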

502 Bad Gateway when deploying Apollo Server application to GKE

I'm trying to deploy my Apollo Server application to my GKE cluster. However, when I visit the static IP for my site I receive a 502 Bad Gateway error. I was able to get my client to deploy properly in a similar fashion so I'm not sure what I'm doing wrong. My deployment logs seem to show that the server started properly. However my ingress indicates that my service is unhealthy since it seems to be failing the health check.
Here is my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <DEPLOYMENT_NAME>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <POD_NAME>
  template:
    metadata:
      name: <POD_NAME>
      labels:
        app: <POD_NAME>
    spec:
      serviceAccountName: <SERVICE_ACCOUNT_NAME>
      containers:
      - name: <CONTAINER_NAME>
        image: <MY_IMAGE>
        imagePullPolicy: Always
        ports:
        - containerPort: <CONTAINER_PORT>
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - '/cloud_sql_proxy'
        - '-instances=<MY_PROJECT>:<MY_DB_INSTANCE>=tcp:<MY_DB_PORT>'
        securityContext:
          runAsNonRoot: true
My service.yml
apiVersion: v1
kind: Service
metadata:
  name: <MY_SERVICE_NAME>
  labels:
    app: <MY_SERVICE_NAME>
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: <CONTAINER_PORT>
  selector:
    app: <POD_NAME>
And my ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: <INGRESS_NAME>
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <CLUSTER_NAME>
    networking.gke.io/managed-certificates: <CLUSTER_NAME>
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: <SERVICE_NAME>
          servicePort: 80
Any ideas what is causing this failure?
With Apollo Server you need the health check to look at the correct endpoint. Add the following to your deployment.yml under the application container:
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
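If you want to confirm the endpoint before redeploying, you can port-forward to the deployment and hit it directly (a quick sketch; the local port 4000 is arbitrary):
kubectl port-forward deploy/<DEPLOYMENT_NAME> 4000:<CONTAINER_PORT>
# in a second shell:
curl -i http://localhost:4000/.well-known/apollo/server-health
# a healthy Apollo Server answers with HTTP 200 and {"status":"pass"}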

Service not reachable in Kubernetes

For my research project, I need to deploy Graylog in our Kubernetes infrastructure. Graylog uses MongoDB which is deployed on the same cluster.
kubectl describe svc -n mongodb
Name: mongodb
Namespace: mongodb
Labels: app=mongodb
Annotations:
Selector: app=mongodb
Type: ClusterIP
IP: 10.109.195.209
Port: 27017 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.244.2.21:27017
Session Affinity: None
Events: <none>
I use the deployment script below to deploy Graylog:
apiVersion: v1
kind: Service
metadata:
  name: graylog3
spec:
  type: NodePort
  selector:
    app: graylog-deploy
  ports:
  - name: "9000"
    port: 9000
    targetPort: 9000
    nodePort: 30003
  - name: "12201"
    port: 12201
    targetPort: 12201
    nodePort: 30004
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graylog-deploy
  labels:
    app: graylog-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: graylog-deploy
  template:
    metadata:
      labels:
        name: graylog-deploy
        app: graylog-deploy
    spec:
      containers:
      - name: graylog3
        image: graylog/graylog:3.0
        env:
        - name: GRAYLOG_PASSWORD_SECRET
          value: g0ABP9MJnWCjWtBX9JHFgjKAmD3wGXP3E0JQNOKlquDHnCn5689QAF8rRL66HacXLPA6fvwMY8BZoVVw0JqHnSAZorDDOdCk
        - name: GRAYLOG_ROOT_PASSWORD_SHA2
          value: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
        - name: GRAYLOG_HTTP_EXTERNAL_URI
          value: http://Master_IP:30003/
        - name: GRAYLOG_ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: GRAYLOG_MONGODB_URI
          value: mongodb://mongodb:27017/graylog
        ports:
        - containerPort: 9000
        - containerPort: 12201
Graylog is throwing an exception:
Caused by: java.net.UnknownHostException: mongodb
But when deploying it using the MongoDB IP, it runs successfully.
I am new to Kubernetes and don't know what I am doing wrong here.
Thanks.
Since your mongodb service is running in a different namespace, called mongodb, you need to provide the FQDN of the service in that namespace. Your Graylog deployment is in the default namespace.
So to access the mongodb service in the mongodb namespace, change your YAML as below:
- name: GRAYLOG_MONGODB_URI
  value: mongodb://mongodb.mongodb:27017/graylog
Here is a link that might provide more insight.
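If you want to double-check the name resolution from inside the cluster, a throwaway pod works (just a sketch; the pod name is arbitrary):
kubectl run dns-test --image=busybox -it --restart=Never -- nslookup mongodb.mongodb
# should resolve to the ClusterIP (10.109.195.209 in the describe output above);
# the fully qualified form mongodb.mongodb.svc.cluster.local resolves the same way
kubectl delete pod dns-test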

Kubernetes - Best strategy for pods with same port?

I have a Kubernetes cluster with 2 slaves (worker nodes). I have 4 Docker containers which all use a Tomcat image and expose ports 8080 and 8443. When I put each container into a separate pod, I get an issue with the ports since I only have 2 worker nodes.
What would be the best strategy for my scenario?
Current error message is: 1 PodToleratesNodeTaints, 2 PodFitsHostPorts.
Put all containers into one pod? This is my current setup (times 4)
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myApp1
  namespace: appNS
  labels:
    app: myApp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myApp1
  template:
    metadata:
      labels:
        app: myApp1
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - image: myregistry:5000/myApp1:v1
        name: myApp1
        ports:
        - name: http-port
          containerPort: 8080
        - name: https-port
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
---
kind: Service
apiVersion: v1
metadata:
  name: myApp1-srv
  namespace: appNS
  labels:
    version: "v1"
    app: "myApp1"
spec:
  type: NodePort
  selector:
    app: "myApp1"
  ports:
  - protocol: TCP
    name: http-port
    port: 8080
  - protocol: TCP
    name: https-port
    port: 8443
You should not use hostNetwork unless absolutely necessary. Without host network you can have multiple pods listening on the same port number as each will have its own, dedicated network namespace.
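As an illustration, here is a minimal sketch of the same Deployment with the host-networking lines removed (everything else is copied from your manifest unchanged); each pod then listens on 8080/8443 inside its own network namespace, so the four apps no longer compete for host ports:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: myApp1
  namespace: appNS
  labels:
    app: myApp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myApp1
  template:
    metadata:
      labels:
        app: myApp1
    spec:
      # hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet removed:
      # the pod gets its own network namespace and its own IP
      containers:
      - image: myregistry:5000/myApp1:v1
        name: myApp1
        ports:
        - name: http-port
          containerPort: 8080
        - name: https-port
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Your existing NodePort Service then routes external traffic to whichever node port Kubernetes assigns, so fixed host ports are no longer needed.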

Connection timedout when attempting to access any service in kubernetes

I've created a deployment and a service and deployed them using Kubernetes, but when I try to access them with curl I always get a connection timed out error.
Here are my YAML files:
Deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: locations-service
  template:
    metadata:
      labels:
        app: locations-service
    spec:
      containers:
      - image: dropwizard:latest
        imagePullPolicy: Never # just for testing!
        name: locations-service
        ports:
        - containerPort: 8080
          protocol: TCP
          name: app-port
        - containerPort: 8081
          protocol: TCP
          name: admin-port
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
Service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: locations-service
  name: locations-service
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: "8081"
    port: 8081
    targetPort: 8081
    protocol: TCP
  selector:
    app: locations-service
I also tried to add ingress routes and hit them, but the same problem occurs.
Note that the application is deployed successfully, and I can check the logs from the k8s dashboard.
Another example: I have the following svc
kubectl describe service webapp1-svc
Name: webapp1-svc
Namespace: default
Labels: app=webapp1
Annotations: <none>
Selector: app=webapp1
Type: NodePort
IP: 10.0.0.219
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and tried to access it using
curl -v 10.0.0.219:30080
* Rebuilt URL to: 10.0.0.219:30080/
* Trying 10.0.0.219...
* connect to 10.0.0.219 port 30080 failed: Connection timed out
* Failed to connect to 10.0.0.219 port 30080: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.0.0.219 port 30080: Connection timed out
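One thing worth checking against the describe output above: 10.0.0.219 is the ClusterIP and 30080 is the NodePort, and the curl above combines the two. The combinations that normally work (sketched here; <node-ip> stands for the address of any cluster node) are:
# from inside the cluster (e.g. from another pod): ClusterIP + service port
curl -v http://10.0.0.219:80
# from outside the cluster: a node's IP + the NodePort
curl -v http://<node-ip>:30080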