Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress - kubernetes

I get this error message when trying to access the application via the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
It looks like Kong is unable to reach the upstream services.
I am using the example voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: voting-service
          servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
  - targetPort: 80
    port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: dockersamples/examplevotingapp_vote
        ports:
        - containerPort: 80
  replicas: 2
  selector:
    matchLabels:
      app: voting-app

There could be one of many things wrong here, but essentially your ingress cannot get to your backend.
Is your backend up and running?
Check that the backend pods are "Running":
kubectl get pods
Check that the backend deployment has all replicas up:
kubectl get deploy
Connect to the app pod and run a localhost:80 request:
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there:
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you resolve it via DNS?)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some insight as to why you can't reach the backend service. What HTTP error code are you seeing?
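The Kong proxy logs also usually name the exact upstream it failed to balance to. For reference, a minimal sketch of pulling them, assuming a standard Kong install where the proxy pods carry the label app=ingress-kong in the kong namespace:

# tail the Kong proxy container logs (label and container name are assumptions)
kubectl logs -n kong -l app=ingress-kong -c proxy --tail=50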

The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve the service name via cluster DNS from another namespace; we need to mention the FQDN in the ingress YAML.

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: voting-service
            port:
              number: 80
Try this; I think it will work.
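Note that an Ingress can only reference Services in its own namespace, which is likely why moving everything into the kong namespace fixed it. If you would rather keep the app in the default namespace, a common workaround (a sketch, not verified against this setup) is an ExternalName Service in the kong namespace pointing at the FQDN of the real service:

# ExternalName alias in the kong namespace; the Ingress backend
# then resolves it to the real service in the default namespace
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  namespace: kong
spec:
  type: ExternalName
  externalName: voting-service.default.svc.cluster.local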

Related

not able to connect to the deployment, after installing the ingress-controller and ingress

I have deployed the nginx-ingress controller in a Kubernetes cluster hosted on AWS using kOps (Kubernetes Operations).
As part of the ingress controller testing, I have installed an nginx-ingress controller.
To test it, I have created one deployment, one service and one ingress. Below is the YAML code that I used.
I am able to see the backends in the ingress.
When I run a different pod on the same cluster and try to connect to the service from it, I am able to reach the deployment.
When I expose the same deployment with NodePort, I am able to reach the deployment as well.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Service:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-deployment
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
Ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.gopinallani.xyz
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
ingress-controller --> https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/deploy.yaml
Kubernetes version:
Client Version: v1.25.0
Kustomize Version: v4.5.7
Server Version: v1.24.4
I have added the DNS entry in the AWS Route53 zone.
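A couple of checks worth running here (a sketch; the namespace and service names assume the stock ingress-nginx AWS manifest): confirm the controller's LoadBalancer Service actually got an AWS hostname, and that the Route53 record points at it.

# did the controller get an external AWS load-balancer hostname?
kubectl get svc -n ingress-nginx ingress-nginx-controller
# does the ingress list the expected backend endpoints?
kubectl describe ingress nginx-ingress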

Localhost kubernetes ingress not exposing services to local machine

I'm running Kubernetes on localhost; the pod is running and I can access the services when port-forwarding:
kubectl port-forward svc/my-service 8080:8080
I can GET/POST etc. the services on localhost.
I'm trying to use an ingress to access it. Here is the YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
I've also installed the ingress controller, but it isn't working as expected. Is anything wrong with this?
EDIT: the service that I'm trying to connect to with the ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: test/my-service:0.0.1-SNAPSHOT
        name: my-service
        ports:
        - containerPort: 8080
        # ... other spring boot override properties
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
The service is working by itself, though.
EDIT:
It worked when I used https instead of http.
Is the ingress resource in the same namespace as the service? Can you share the manifest of the service? Also, what do the logs of the nginx ingress-controller show, and what sort of error do you get when hitting the endpoint in the browser?
The Ingress's YAML file looks OK to me, BTW.
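For reference, one way to pull those controller logs (a sketch, assuming the standard ingress-nginx install in the ingress-nginx namespace):

# recent controller logs; failed proxying attempts show up here
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=100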
I was being stupid; it worked when I used https instead of http.

Exposing Redis with Ingress Nginx Controller

Hello, when I use NodePort to expose my Redis service it works fine; I am able to access it.
But if I switch to the Ingress Nginx controller, it refuses to connect. Other apps work fine with ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    protocol: TCP
    # nodePort: 30007
  selector:
    app: redis
And here is the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - secretName: letsencrypt-prod
    hosts:
    - redis-dev.domain.com
  rules:
  - host: redis-dev.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: redis-svc
          servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress
Redis listens on 6379, which is not an HTTP port (80/443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for Redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the configmap that is installed by default when enabling the minikube ingress addon.
There are 2 configmaps, 1 for TCP services and 1 for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these configmaps are centralized and may contain configurations, it is best if we only patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
  - name: tcp-port
    port: 6379
    targetPort: 6379
    protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller you can run the following command:
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only value you need to validate is that there is a value under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster. We need to patch our nginx controller so that it listens on port 6379 and can route traffic to your service. To do this, we need to create a patch file.
spec:
  template:
    spec:
      containers:
      - name: ingress-nginx-controller
        ports:
        - containerPort: 6379
          hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
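Once the patched controller has rolled out, you can verify end-to-end connectivity (a sketch; assumes redis-cli is installed on your machine):

# PING through the ingress controller's hostPort on the minikube VM
redis-cli -h $(minikube ip) -p 6379 ping
# expected reply: PONG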
The way I made it work was by enabling ssl-passthrough on the nginx-ingress controller.
Once my nginx-ingress controller was patched, I was able to connect.
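For reference, a minimal sketch of enabling that flag, assuming the community kubernetes/ingress-nginx controller deployed as ingress-nginx-controller in the ingress-nginx namespace (ssl-passthrough is a controller argument and is off by default):

# append --enable-ssl-passthrough to the controller's args
kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'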
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
  - hosts:
    - <host_address>
    secretName: <k8s_secret_name>
  rules:
  - host: <host_address>
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: redis-service
            port:
              number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.

Minikube ingress is not responding

I am trying to redirect traffic to my cluster's Deployment object. My Deployment object is a node image which depends on a SQL Server image.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: simptekapi-api-service
          servicePort: 5000
My ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  name: simptekapi-api-service
spec:
  selector:
    component: api
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: 5000
My Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simptekapi-api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: simptekapi-api
        image: nayan2/simptekapi
        ports:
        - containerPort: 5000
My node image is basically a test API project. I am sharing the SignIn URL (http://Minikube_ip/api/v1/auth/SignIn).
Everything seems OK to me, but as I am still literally a newbie with Kubernetes, it's hard for me to figure out what I am doing wrong.
While I have tried to reach my pods through NodePort, everything was perfect. I don't know why I am unable to reach them through the ingress.
kubectl get ingress example-ingress ==> Result
and kubectl get pods -n ingress-nginx ==> Result
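A few diagnostics that usually narrow this down (a sketch; assumes the minikube ingress addon is the controller in use):

# make sure the addon is actually enabled
minikube addons enable ingress
# check that the ingress resolved backend endpoints and got an address
kubectl describe ingress example-ingress
# hit the route through the minikube VM's IP
curl -i http://$(minikube ip)/api/v1/auth/SignIn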

Migrate to istio from nginx ingress

I have a simple single-page Golang web application that I'm trying to migrate to Istio.
My prod setup (via nginx ingress):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - mycustomapp.mycustomapp.com
    secretName: go-tls
  rules:
  - host: mycustomapp.mycustomapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mycustomapp
          servicePort: 80
And I'm trying to build at least an HTTP configuration for Istio:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: mycustomapp.mycustomapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mycustomapp
          servicePort: 80
But I always get a 404 from the Istio LB on a clean cluster with only Istio 0.7.1 installed. Samples like bookinfo and httpbin work well.
Application yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mycustomapp
  template:
    metadata:
      labels:
        k8s-app: mycustomapp
    spec:
      containers:
      - name: mycustomapp
        image: xxxx.azurecr.io/mycustomapp:999
        ports:
        - containerPort: 80
          protocol: TCP
      imagePullSecrets:
      - name: xxxx
      serviceAccountName: mycustomapp
---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    k8s-app: mycustomapp
To get rid of the 404 error in your case, it should be enough to add the correct port name to the service and deployment YAML files and add the Istio sidecar to the deployment YAML file. Then you should redeploy all changed files.
Perhaps you may also need to add the label app: mycustomapp to the service and deployment, but I'm not sure whether it is required or optional.
Here is an example of the service.yaml file with the correct port name (more about the port names you can read here):
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: mycustomapp
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
  - name: http-80
    port: 80
    targetPort: 80
  selector:
    k8s-app: mycustomapp
Ensure you also have the correct port name in your deployment file, for example as sketched below.
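A sketch of what that could look like in the deployment's container spec (the port name http-80 mirrors the service above; it is an assumption, not taken from the original post):

      containers:
      - name: mycustomapp
        image: xxxx.azurecr.io/mycustomapp:999
        ports:
        # named to match the service port so Istio treats it as HTTP
        - name: http-80
          containerPort: 80
          protocol: TCP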
You can add the istio sidecar to the container manually, following these steps:
Download and unpack the latest Istio release suitable for your OS from https://github.com/istio/istio/releases
Change directory to the Istio package. For example, if the package is istio-0.7:
cd istio-0.7
Create inject config:
kubectl create -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml --dry-run -o=jsonpath='{.data.config}' > inject-config.yaml
Create mesh config:
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
Add istio sidecar container to your deployment:
bin/istioctl kube-inject \
  --injectConfigFile inject-config.yaml \
  --meshConfigFile mesh-config.yaml \
  --filename path/to/original/deployment.yaml \
  --output deployment-injected.yaml
Deploy the new deployment:
kubectl apply -f deployment-injected.yaml
If you want to have automatic sidecar injection, follow this manual.
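For completeness, automatic injection boils down to labeling the target namespace (a sketch; it assumes the sidecar injector webhook from that manual is installed):

# new pods created in "default" then get the sidecar injected automatically
kubectl label namespace default istio-injection=enabled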
You can check if the sidecar has been injected into the deployment:
$ kubectl get deployment mycustomapp -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                    SELECTOR
mycustomapp   1         1         1            1           3h    mycustomapp,istio-proxy   nginx:1.7.9,docker.io/istio/proxy:0.7.1   k8s-app=mycustomapp