I've set up a Kubernetes cluster in GKE and installed RabbitMQ (from the marketplace) and Istio (via Helm). I can access RabbitMQ from pods until I enable Envoy proxy injection into those pods; after that the traffic no longer reaches RabbitMQ, and I can't figure out how to allow traffic to the RabbitMQ service.
There is a service rabbitmq-rabbitmq-svc (in the rabbitmq namespace) that is of type LoadBalancer.
I've tried with a simple busybox pod: when I don't have Envoy running I have no trouble telnetting to RabbitMQ (port 5672), but as soon as I try with automatic Envoy injection enabled, Envoy blocks the traffic.
I tried, unsuccessfully, to add a DestinationRule (the rule is applied but makes no difference):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq.rabbitmq.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
It seems like it should be a simple solution, but I can't figure it out... :/
UPDATE
Turns out it was a simple error in the hostname; I ended up using this and it works:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local
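If you're unsure of the exact hostname, you can list the services to confirm the name and namespace; the DestinationRule host is then <service>.<namespace>.svc.cluster.local (a quick sketch, using the names from this setup):

# Confirm the service name in the rabbitmq namespace:
kubectl get svc -n rabbitmq
# DestinationRule host = <service>.<namespace>.svc.cluster.local,
# here: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local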
The only thing I needed to do to get RabbitMQ clusters to work within Istio was to annotate the RabbitMQ pods as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
spec:
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              # Annotate RabbitMQ pods to only redirect traffic on ports 15672 and 5672 to the Envoy proxy sidecar.
              traffic.sidecar.istio.io/includeInboundPorts: "15672, 5672"
              traffic.sidecar.istio.io/includeOutboundPorts: "15672, 5672"
For some reason the exclude-port annotations weren't working, so I flipped it around and used the include-port annotations instead. In my case the global Istio config is controlled by another team in the company, so perhaps there's a clash when trying to use the exclude-port annotations.
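For reference, the exclude-port form I couldn't get working looks like the sketch below. A common pattern with RabbitMQ is to exclude the Erlang clustering ports instead, so inter-node traffic bypasses Envoy while client traffic stays in the mesh (the port choices here are assumptions based on RabbitMQ defaults: epmd on 4369, inter-node traffic on 25672):

# Hypothetical exclude-port variant (did not work in my environment):
traffic.sidecar.istio.io/excludeInboundPorts: "4369, 25672"
traffic.sidecar.istio.io/excludeOutboundPorts: "4369, 25672"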
I may have encountered the same problem as you before. My app could connect to RabbitMQ through Envoy after declaring epmd (port 4369) in the RabbitMQ service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  type: ClusterIP
  ports:
  - port: 5672
    targetPort: 5672
    name: message
  - port: 4369
    targetPort: 4369
    name: epmd
  - port: 15672
    targetPort: 15672
    name: management
  selector:
    app: rabbitmq
Related
I have the following template, with a Deployment, a Service and an Ingress. I previously ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
      - name: fastapi
        image: datamastery/fastapi
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 3000
    nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
Normally I would now expect to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN (the cluster's software-defined network). Only members of that network can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider offering LoadBalancer capabilities. When in doubt, you should use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is likewise not required (it would make sense with a NodePort service, which is also useful in a few use cases, but should not be used otherwise).
You did create an Ingress. If you have an Ingress Controller, you want to connect to that IP/port. The Host header tells your ingress controller where to route the request within your SDN.
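For example, on minikube with the ingress addon, you would connect to the node IP and the controller's port rather than the Service's ClusterIP. A sketch of how to look these up (the ingress-nginx namespace is an assumption; recent minikube versions install the addon there):

# Find the ingress controller service and the ports it exposes:
kubectl get svc -n ingress-nginx
# Use the node IP (not the ClusterIP) in /etc/hosts:
minikube ip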
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have only one node OR you really control where your service pods get deployed. Otherwise it is not suitable to use a node IP to access services.
To overcome this issue we usually use an ingress as a service proxy. Incoming traffic is routed to the correct service's pods depending on the URL and port; the ingress also manages SSL termination. So basically this is your first "load balancer", as the ingress assigns traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace, for example for nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This spins up a cloud load balancer from your provider and links it to the ingress service in your cluster. So basically you now have a real load balancer in place balancing traffic between your nodes, while the ingress routes it to your services and the services to your pods.
Back to your question:
In your config files you try to spin up a service with type: LoadBalancer. This skips the ingress part and spins up a second cloud load balancer from your provider, dedicated to this single service.
You have to remove the type (and nodePort) to use the default ClusterIP for your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
In addition, you have referenced the wrong port: your Ingress object points to port 3000, while your Service object exposes port 5000. So we change this as well.
With this config, traffic on the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and on to your pods.
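For completeness, the matching Ingress would look like the sketch below. Two assumptions beyond the text above: the Ingress lives in the same namespace as the Service (an Ingress backend cannot reference a Service in another namespace, so the kubernetes-dashboard namespace from the question is dropped), and pathType is Prefix so sub-paths match as well:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastapi-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: datamastery.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fastapi-service
            port:
              number: 3000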
I am following a very simple tutorial that spawns a simple pod with an HTTP endpoint, plus a service to expose that app, using Kubernetes.
The setup is very simple:
app-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: web
spec:
  containers:
  - name: web-ctr
    image: nigelpoulton/getting-started-k8s:1.0
    ports:
    - containerPort: 8080
And the NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: ps-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31111
    protocol: TCP
  selector:
    app: web
The service and pod seem to be healthy, but I can't reach the running app:
localhost:31111
gives a "This site can't be reached" message.
I am new to this stuff so any help will be appreciated.
In a Kubernetes kind cluster, NodePorts are by default not bound to localhost. Please check the following resources:
https://kind.sigs.k8s.io/docs/user/quick-start/#mapping-ports-to-the-host-machine
How to use NodePort with kind?
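If you want localhost:31111 to work directly, kind needs the port mapped when the cluster is created. A sketch of such a config (the port matches your service's nodePort; the cluster must be recreated with it):

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31111   # the service's nodePort
    hostPort: 31111        # exposed as localhost:31111
# then: kind create cluster --config kind-config.yaml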
The simplest way to access the service from localhost (like you are trying to do) would be to use
kubectl port-forward
Note that port-forward targets the service port (80 here), not the nodePort, so the following command forwards traffic from localhost:31111 to the ps-nodeport service:
kubectl port-forward service/ps-nodeport 31111:80
I have a website that needs to be proxied through my web app.
Traditionally we've accomplished it via apache proxy with proxy directives.
The proxy also rewrites some of the headers and adds a couple of new ones.
Now the app has moved to OpenShift (Kubernetes) and I'm trying to avoid deploying another pod with apache.
Can I perform this header rewriting and proxying via a K8s Ingress, or the router?
I've tried this approach, but it didn't work.
I also don't know how to get the OpenShift Ingress logs; nothing seems to happen in there.
I tried using an ExternalName service, but it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: es3
spec:
  externalName: google.com
  type: ExternalName
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: es3
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: es3
  port:
    targetPort: es3
I also tried using Endpoints, with the same result:
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: my.ip.address
  ports:
  - name: app
    port: 80
    protocol: TCP
You want to proxy to a non-Kubernetes service, right? If yes, create Endpoints and a Service on top of them. I have used this with Kubernetes, and my wild guess is it will work with OpenShift too.
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
I am new to k8s
I have a deployment file, shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: jenkins
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
My Service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    component: web
My Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jenkins.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes:
nginx ingress
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com it didn't work.
When I run
kubectl get ing
the Ingress resource doesn't get an IP address assigned to it.
The ingress resource is nothing but the configuration of a reverse proxy (the Ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you claim you used to get the ingress controller running, there is no sign of exposure to the outside network.
You need at least to define a Service to expose your controller (it might be a LoadBalancer, if the provider hosting your cluster supports it); alternatively you can use hostNetwork: true or a NodePort.
To use the last option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
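That manifest boils down to something like this sketch (the namespace and labels follow the upstream ingress-nginx deployment of that version; treat them as assumptions):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx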
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
In order to access your local Kubernetes cluster's pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. You can then access the service using any of the cluster's node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082        # Cluster IP port, i.e. http://10.103.75.9:8082
    targetPort: 8080  # Application port
    nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture, to:
Load balance traffic, externally or internally
Control failures, retries, routing
Apply limits and monitor network traffic between services
Secure communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).
I really like the Kubernetes Ingress schematics. I currently run ingress-nginx controllers to route traffic into my Kubernetes pods.
I would like to use this to also route traffic to "normal" machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName service, in which you point a FQDN at an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx rule.
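A sketch of such a rule (the host is a made-up example; the upstream-vhost annotation, which rewrites the Host header for the external backend, is an assumption about what nginx-ingress needs here):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: my.database.example.com
spec:
  rules:
  - host: proxy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
Note that the ExternalName service may also need a ports entry so the Ingress backend port resolves.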
You can create a static Service and corresponding Endpoints for external services that are not part of k8s, and then use that k8s Service in an ingress to route traffic.
Also see the ingress-nginx docs on enabling custom upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs:
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
- addresses:
  - ip: x.x.x.x
  - ip: x.x.x.x
  - ip: x.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
I don't think it's possible, since ingress-nginx gets pod info by watching namespace, Service, Endpoints, and Ingress resources and then redirects traffic to pods; without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs it needs to load-balance. ingress-nginx also has no health-check method of its own; it's up to Kubernetes' built-in mechanisms to check the health of the running pods.