upstream connection error with istio mesh - kubernetes

I have a Kubernetes cluster with an Istio service mesh and MetalLB. The default namespace has istio-injection enabled.
I am trying to install the Gravitee API gateway in my Kubernetes cluster.
gravitee.yaml (deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gravitee-gateway-test
  #labels:
  #  app: ratings
  #  version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gravitee-gateway
      #version: v1
  template:
    metadata:
      labels:
        app: gravitee-gateway
        #version: v1
    spec:
      containers:
      - name: gravitee-container
        image: graviteeio/gateway:latest
        ports:
        - containerPort: 8082
gravitee-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gravitee-gateway-service
  #labels:
  #  app: reviews
  #  service: reviews
spec:
  ports:
  - port: 9080
    name: http
    protocol: TCP
  selector:
    app: gravitee-gateway
  type: LoadBalancer
Both were applied with kubectl apply -f.
MetalLB assigns a new IP address to the Gravitee service, 123.456.789.11, with port 9080. When I browse to 123.456.789.11:9080, I get the following error:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
What am I missing here?
Referred:
503 upstream issue : istio
Random upstream error
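As posted, the Service defines port 9080 with no targetPort, so the targetPort defaults to 9080, while the gateway container in the Deployment listens on 8082; with the Envoy sidecar injected, that mismatch is one plausible cause of the connection failure. A minimal sketch of the Service with an explicit targetPort (assuming the gateway really does listen on 8082, as the Deployment suggests) would be:
apiVersion: v1
kind: Service
metadata:
  name: gravitee-gateway-service
spec:
  type: LoadBalancer
  selector:
    app: gravitee-gateway
  ports:
  - name: http            # Istio can use the port name for protocol selection
    port: 9080            # port exposed via MetalLB
    targetPort: 8082      # containerPort from the Deployment above
    protocol: TCP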

Related

How does the gateway connect to other services in Kubernetes?

I am attaching an image of my application flow. The gateway and the other services are built with NestJS, and every API request comes in through the gateway.
The gateway pod and the API pods communicate over TCP.
After deployment, the gateway is not able to discover any API pods.
I am also attaching the YAML for both the gateway and the pods.
Please let me know what mistake I am making in the YAML files.
[APPLICATION DIAGRAM image]
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to reach the services by their DNS names, for example pod1-srv from within the same namespace, or pod1-srv.roushan.svc.cluster.local as the fully qualified name. If that does not work, you may need to look at the cluster's DNS setup.
I have not used AKS; it may use a different cluster domain than svc.cluster.local.
YAML Points
Ideally, you should use different selectors for the different Deployments; you are currently using the same selector for both the gateway Deployment and the application Deployment.
A Service forwards traffic to Pods based on selectors and labels, so with identical labels a request meant for service 2 might be redirected to pod1 (and vice versa); see the sketch below.
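As a sketch of that point, the gateway Deployment and the pod1 Service could use distinct labels so each Service only selects its own Pods (the label values roushan-gateway and roushan-pod1 are made up for the example):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-gateway          # hypothetical label, unique to the gateway
  template:
    metadata:
      labels:
        app: roushan-gateway
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-pod1               # hypothetical label, unique to the pod1 Deployment
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The pod1 Deployment would then carry app: roushan-pod1 in its template labels so the Service above matches it.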
Networking
Your gateway Pods can connect to an internal service by its service name alone, e.g. pod1-srv, if both are in the same namespace.
If the gateway and the application are in different namespaces, call the service as http://<servicename>.<namespace>.svc.cluster.local.

Use a common container registry in k8s deployments in a federated cluster

Setup
I have a federated Kubernetes setup in which each cluster has its own master and workers.
In the federation, each cluster uses a different domain to access its image registry (e.g. myregistry-1, myregistry-2).
In other words, each cluster has its own registry.
Question
I don't want to change the registry domain for each cluster. Basically, I would like to create a common endpoint that maps to the registry internal to each cluster.
Example: the Deployment below runs on all clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.default:5000/nginx:1.14.2
        ports:
        - containerPort: 80
I tried to implement "Services without selectors": I created an Endpoints object and updated deployment.yaml, but it didn't work.
harbor.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-service
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
harbor-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor-service
subsets:
- addresses:
  - ip: <INTERNAL_IP_OF_REGISTRY>
  ports:
  - port: 5000
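One thing to double-check, offered as a sketch rather than a confirmed fix: the image reference harbor.default:5000/... implies a Service literally named harbor in the default namespace, while the Service above is named harbor-service. Also, images are pulled by the node's container runtime, which normally resolves registry hostnames through the node's DNS rather than the cluster DNS, so an in-cluster Service name may not resolve at pull time unless node DNS is set up to forward to the cluster DNS. Assuming that DNS path exists, the selector-less Service and Endpoints would need to look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: harbor                        # must match the host part of harbor.default:5000/...
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: v1
kind: Endpoints
metadata:
  name: harbor                        # Endpoints name must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: <INTERNAL_IP_OF_REGISTRY>     # placeholder from the question, left as-is
  ports:
  - port: 5000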

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort           # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a regular Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command in order to expose it properly.
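For example (assuming the manifests above are applied as-is), running minikube service identityold-svc --url should print a URL built from the minikube node IP and the NodePort, which should be reachable from the local browser.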

How to access a service from another namespace in kubernetes

I have the zipkin Deployment and Service below; as you can see, zipkin lives in the monitoring namespace. Each of my pods running in the default namespace has an env variable called ZIPKIN_URL, which takes the URL http://zipkin:9411/api/v2/spans. Since zipkin is running in another namespace, I tried this:
http://zipkin.monitoring.svc.cluster.local:9411/api/v2/spans
I also tried this format:
http://zipkin.monitoring:9411/api/v2/spans
but when I check the logs of my pods, I see a connection refused exception.
When I exec into one of my pods and try curl http://zipkin.tools.svc.cluster.local:9411/api/v2/spans
it shows me Mandatory parameter is missing: serviceName
Here is the zipkin resource:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zipkin
  namespace: monitoring
spec:
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin:2.19.3
        ports:
        - containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  namespace: monitoring
spec:
  selector:
    app: zipkin
  ports:
  - name: http
    port: 9411
    protocol: TCP
  type: ClusterIP
What you have is correct; your issue is likely not DNS. You can confirm that by doing just a DNS lookup and comparing the result to the ClusterIP of the Service.
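For instance (assuming the pod image ships the usual DNS tools), running nslookup zipkin.monitoring.svc.cluster.local from inside one of the default-namespace pods and comparing the answer with the ClusterIP shown by kubectl get svc zipkin -n monitoring would confirm whether the name resolves; if it does, the connection refused error points at something other than name resolution, for example the env variable still holding the short http://zipkin:9411 form mentioned in the question.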

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The frontend service is basically a Node.js app that only makes a REST call to the API service, like axios.get('http://myapp-api').
The issue is that the call fails and it is unable to find the requested endpoint. Is there any additional config that I'm currently missing for the API service to be discovered?
P.S. Both services are running, and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster.
Something to watch out for with frontends is where the HTTP call takes place. It could happen on the server with Node, but given that you're seeing this issue, I'd suggest it's happening in the browser. (If you can see the call in the browser console, then it is.)
If the frontend's call is made in the browser, it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve. To handle this, you could make the backend Service a LoadBalancer and pass the external name into the frontend.
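As a rough sketch of that last suggestion (keeping the ports from the posted service.yaml and assuming the API may be exposed directly), the backend Service could be switched to type LoadBalancer, and the frontend then given the resulting external IP or hostname instead of http://myapp-api:
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  type: LoadBalancer       # gives the API an external address the browser can reach
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
The external address appears under EXTERNAL-IP in kubectl get svc myapp-api and would typically be passed to the frontend as configuration (e.g. an environment variable) rather than hard-coded.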