Make RabbitMQ cluster publicly accessible - Kubernetes

I am using this helm chart to configure RabbitMQ on a k8s cluster:
https://github.com/helm/charts/tree/master/stable/rabbitmq
How can I make the cluster accessible through a public endpoint? Currently, I have a cluster with the configuration below. I am able to access the management portal via the given hostname (a public endpoint, which is fine). But when I check inside the management portal, the cluster is only reachable via the internal IP and/or hostname, which is rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local and rabbit@<private_ip>. I want to make the cluster public so that all other services which are outside of the VNET can connect to it.
helm install stable/rabbitmq --name rabbitmq \
--set rabbitmq.username=xxx \
--set rabbitmq.password=xxx \
--set rabbitmq.erlangCookie=secretcookie \
--set rbacEnabled=true \
--set ingress.enabled=true \
--set ingress.hostName=rabbitmq.xxx.com \
--set ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
--set resources.limits.memory="256Mi" \
--set resources.limits.cpu="100m"

I have not tried this with Helm; I built and deployed to Kubernetes directly from .yaml configuration files, so I only followed the templates of the Helm chart.
To publish your RabbitMQ service outside the cluster:
1. You need to have an external IP.
If you are using Google Cloud, run these commands:
gcloud compute addresses create rabbitmq-service-ip --region asia-southeast1
gcloud compute addresses describe rabbitmq-service-ip --region asia-southeast1
>address: 35.240.xxx.xxx
Change rabbitmq-service-ip to the name you want, and change the region to your own.
2. Configure the Helm parameters:
service.type=LoadBalancer
service.loadBalancerSourceRanges=35.240.xxx.xxx/32 # IP address you got from gcloud
service.port=5672
Note that loadBalancerSourceRanges restricts which client source CIDRs are allowed to connect; the reserved address itself is assigned to the load balancer via loadBalancerIP (see the service.yaml template below).
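Putting this together with the flags from the question, the install command would look roughly like the following. This is a sketch only; it assumes the stable/rabbitmq chart exposes these keys under service.* (including service.loadBalancerIP), so check the chart's values.yaml for the version you use:
helm install stable/rabbitmq --name rabbitmq \
--set rabbitmq.username=xxx \
--set rabbitmq.password=xxx \
--set rabbitmq.erlangCookie=secretcookie \
--set rbacEnabled=true \
--set service.type=LoadBalancer \
--set service.port=5672 \
--set service.loadBalancerIP=35.240.xxx.xxx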
3. Deploy, then try to telnet to your RabbitMQ service:
telnet 35.240.xxx.xxx 5672
Trying 35.240.xxx.xxx...
Connected to 149.185.xxx.xxx.bc.googleusercontent.com.
Escape character is '^]'.
It worked!
FYI: here is a base template if you want to create the .yaml files and deploy without Helm.
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  type: LoadBalancer
  loadBalancerIP: 35.xxx.xxx.xx
  ports:
    # the port that this service should serve on
    - port: 5672
      name: rabbitmq
      targetPort: 5672
      nodePort: 32672
  selector:
    name: rabbitmq
deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: rabbitmq
      annotations:
        prometheus.io/scrape: "false"
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3.6.8-management
          ports:
            - containerPort: 5672
              name: rabbitmq
          securityContext:
            capabilities:
              drop:
                - all
              add:
                - CHOWN
                - SETGID
                - SETUID
                - DAC_OVERRIDE
            readOnlyRootFilesystem: true
        - name: rabbitmq-exporter
          image: kbudde/rabbitmq-exporter
          ports:
            - containerPort: 9090
              name: exporter
      nodeSelector:
        beta.kubernetes.io/os: linux
Hope this helps!

From the Helm values you passed, I see that you have configured your RabbitMQ service with an Nginx Ingress.
You should create a DNS record with your ingress.hostName (rabbitmq.xxx.com) pointing to the ingress IP (if GCP) or CNAME (if AWS) of your nginx-ingress load balancer. That DNS hostname (rabbitmq.xxx.com) is your public endpoint to access your RabbitMQ service.
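If you need to look up that address, something like the following works (a sketch; it assumes the controller was installed into the ingress-nginx namespace, and names vary by install method):
kubectl get svc -n ingress-nginx
# the EXTERNAL-IP of the controller's LoadBalancer Service (an IP on GCP,
# an ELB hostname on AWS) is what the DNS record should point at
kubectl get ingress
# the ingress created by the chart should list rabbitmq.xxx.com under HOSTS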
Ensure that your nginx-ingress controller is running in your cluster in order for the ingresses to work. If you are unfamiliar with ingresses:
- Official Ingress Docs
- Nginx Ingress installation guide
- Nginx Ingress helm chart
Hope this helps!

Related

Expose Redis as Openshift route

I deployed a standalone Redis in OpenShift (CRC) using the Bitnami helm chart (https://github.com/bitnami/charts/tree/main/bitnami/redis).
I used these parameters:
helm repo add my-repo https://charts.bitnami.com/bitnami
helm upgrade --install redis-ms my-repo/redis \
--set master.podSecurityContext.enabled=false \
--set master.containerSecurityContext.enabled=false \
--set auth.enabled=false \
--set image.debug=true \
--set architecture=standalone
I can see the Redis master pod reporting "Ready to accept connections".
Then I created a Route to expose Redis outside the cluster:
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: redis
spec:
  host: redis-ms.apps-crc.testing
  to:
    kind: Service
    name: redis-ms-master
  port:
    targetPort: tcp-redis
  wildcardPolicy: None
But when I try to connect to "redis-ms.apps-crc.testing:80" I get:
"Unknown reply: H"
Whereas if I use oc port-forward --namespace redis-ms svc/redis-ms-master 6379:6379 and connect to "localhost:6379", it works.
OpenShift Routes are limited to HTTP(S) traffic, due to how the router sends traffic with SNI.
An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.
If you need to expose non-HTTP traffic, like a database or a Redis instance in your case, you can expose the Service directly as a LoadBalancer-type Service. That would look something like the following for your Redis Service:
apiVersion: v1
kind: Service
metadata:
  name: redis-ms-master
spec:
  type: LoadBalancer
  ports:
    - name: tcp-redis
      port: 6379
      targetPort: redis
  selector:
    app.kubernetes.io/component: master
Additionally, it actually looks like that specific helm chart supports setting this as a configuration option, master.service.type.
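For example, an install roughly equivalent to the one in the question could request the LoadBalancer up front. This is a sketch: master.service.type is the value the chart documents, and the remaining flags simply mirror the question:
helm upgrade --install redis-ms my-repo/redis \
--set master.podSecurityContext.enabled=false \
--set master.containerSecurityContext.enabled=false \
--set auth.enabled=false \
--set image.debug=true \
--set architecture=standalone \
--set master.service.type=LoadBalancer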

Azure AKS internal load balancer not responding to requests

I have an AKS cluster, as well as a separate VM. The AKS cluster and the VM are in the same VNET (and subnet).
I deployed an echo server with the following YAML. I'm able to curl the pod directly via its VNET IP from the VM, but when trying the same through the load balancer, nothing comes back. I'm really not sure what I'm missing. Any help is appreciated.
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: ealen/echo-server
          ports:
            - name: http
              containerPort: 8080
(Pictures demonstrating the situation omitted.)
I'm expecting that when I curl the VNET IP of the load balancer from the VM, I receive the same response as when I curl the pod IP directly.
Can you check your internal load balancer's health probe?
"For Kubernetes 1.24+ the services of type LoadBalancer with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as health probe protocol (while before v1.24.0 it uses TCP). And / will be used as the default health probe request path. If your service doesn’t respond 200 for /, please ensure you're setting the service annotation service.beta.kubernetes.io/port_{port}_health-probe_request-path or service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path (applies to all ports) with the correct request path to avoid service breakage."
(ref: https://github.com/Azure/AKS/releases/tag/2022-09-11)
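For a plain LoadBalancer Service like the echo-server one above, that would mean setting the probe path directly on the Service, roughly like this (a sketch; it assumes the echo server returns 200 on "/", otherwise point the annotation at a real health endpoint):
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # assumption: "/" returns 200 on the echo server; change the path if it does not
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: echo-server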
If you are using the nginx-ingress controller, try adding the same annotation as mentioned in the doc:
(https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration)
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--reuse-values \
--namespace <NAMESPACE> \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Have you checked whether the pod's IP is correctly mapped as an endpoint of the service? You can check it using:
k describe svc echo-server -n test | grep Endpoints
If not, please check the labels and selectors against your actual deployment (rather than the resources put in the description).
If it is correctly mapped, are you sure that the VM you are using (_#tester) is in the correct subnet, which should include the iLB IP 10.240.0.226 as well?
Found the solution. The only thing I needed to do was add the following to the Service declaration:
externalTrafficPolicy: 'Local'
Full YAML below:
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: 'Local'
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: echo-server
Previously it was set to 'Cluster'.
I just got off a call with Azure support; it seems to be a specific bug (it happens with newer versions of AKS). Posting the related link here: https://github.com/kubernetes/ingress-nginx/issues/8501

Access NodePort Service Outside Kubeadm K8S Cluster

I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs: one of them is the master node and the other one is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access the NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of the nodes in my k8s cluster: master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message saying the connection was refused.
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml which I used to deploy the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
        - name: dealer-engine
          image: moviepopcorn/dealer_engine:0.0.1
          ports:
            - containerPort: 9090
          env:
            - name: MONGO_URL
              value: mongodb://mongo-service:27017/mazda
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
    - port: 9091
      targetPort: 9090
      nodePort: 30000
  externalIPs:
    - 10.0.0.12
I am a beginner in k8s, so please help me with how I can access my NodePort service from outside my k8s cluster.
I created a new, simple Spring Boot application which returns "Hello world!!!" to the user when the "/helloWorld" endpoint is invoked. I deployed this Spring Boot app into my k8s cluster using the below YAML configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: moviepopcorn/hello_world:0.0.1
          ports:
            - containerPort: 9091
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 9091
      targetPort: 9091
      nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint using the following URL: <K8S_MASTER_NODE_IP>:<NODE_PORT (30001)>.
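For anyone verifying a similar setup, a quick sanity check looks roughly like this (a sketch; it assumes the service name and ports used above):
kubectl get svc hello-world        # should show TYPE NodePort and PORT(S) 9091:30001/TCP
kubectl get endpoints hello-world  # should list the pod IP, confirming the selector matches
curl http://<K8S_MASTER_NODE_IP>:30001/helloWorld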
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like flannel?
If yes, check your CIDR settings with:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugins on every server.
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster with a pod CIDR that matches the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the CNI plugin manifest:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
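Once the flannel pods are up, the nodes should go Ready. A quick check (the namespace is kube-system or kube-flannel depending on the manifest version):
kubectl get pods -A | grep flannel
kubectl get nodes -o wide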

Grafana with kubeflow

I am trying to integrate Grafana with my Kubeflow installation in order to monitor my model.
I have no clue where to start, as I am not able to find anything in the documentation.
Can someone help?
To run Grafana with Kubeflow, follow these steps:
Create the namespace:
kubectl create namespace knative-monitoring
Set up the monitoring components:
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/monitoring-metrics-prometheus.yaml
Launch the Grafana board via port forwarding:
kubectl port-forward --namespace knative-monitoring $(kubectl get pod --namespace knative-monitoring --selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') 8080:3000
Access the Grafana dashboard at http://localhost:8080.
It depends on your configuration. I had a MiniKF instance running on an EC2 VM and needed to specify the address 0.0.0.0 for the port-forwarding method to work:
kubectl port-forward --namespace knative-monitoring \
$(kubectl get pod --namespace knative-monitoring \
--selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') \
--address 0.0.0.0 8080:3000
Then you should be able to access the Grafana dashboard at http://{your-kf-ip}:8080.
You can also expose it via Istio, using this VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: kubeflow
spec:
  gateways:
    - kubeflow-gateway
  hosts:
    - '*'
  http:
    - match:
        - method:
            regex: GET|POST
          uri:
            prefix: /istio/grafana/
      rewrite:
        uri: /
      route:
        - destination:
            host: grafana.istio-system.svc.cluster.local
            port:
              number: 3000
So if you usually visit your Kubeflow dashboard via https://kubeflow.example.com, exposing this through kubeflow-gateway will allow you to access Grafana via https://kubeflow.example.com/istio/grafana/
If you're not using Istio's grafana but Knative's, you can change the destination accordingly.
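For instance, the destination for Knative's Grafana would look roughly like this (an assumption: knative-monitoring is where the monitoring-metrics-prometheus.yaml manifest mentioned above installs Grafana):
      route:
        - destination:
            # assumes Knative's Grafana runs in the knative-monitoring namespace
            host: grafana.knative-monitoring.svc.cluster.local
            port:
              number: 3000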
You might also need to change the root URL of Grafana via an environment variable in Grafana's deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: istio-system
spec:
  template:
    spec:
      containers:
        - env:
            - name: GF_SERVER_ROOT_URL
              value: https://kubeflow.example.com/istio/grafana

Unable to login to Postgres inside Kubernetes cluster from the outside

I simply want to log in to a Postgres DB from outside my K8s cluster. I've created the following config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PORT
              value: '5432'
            - name: POSTGRES_DB
              value: postgres
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres-srv:5432"
I've checked kubectl get services and attempted to use the endpoint and the cluster IP. Neither of these worked.
psql "postgresql://postgres:password@[ip]:5432/postgres"
The pod is running and the logs say everything is ready. Is there anything I'm missing here? I'm running the cluster on DigitalOcean.
Edit:
I want to be able to access the DB from my host (sub.domain.com). I've bounced the deployments and still can't get in. The only config I've targeted is what is shown above. I do have an A record for my domain and can access my other exposed pods via my ingress-nginx service.
You can expose TCP and UDP services with ingress-nginx configuration.
For example, using GKE with ingress-nginx, nfs-server-provisioner, and the bitnami/postgresql helm charts:
kubectl create secret generic -n default postgresql \
--from-literal=postgresql-password=$(openssl rand -base64 32) \
--from-literal=postgresql-replication-password=$(openssl rand -base64 32)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -n default postgres bitnami/postgresql \
--set global.storageClass=nfs-client \
--set existingSecret=postgresql
Patch the ingress-nginx tcp-services ConfigMap:
kubectl patch cm -n ingress-nginx tcp-services -p '{"data": {"5432": "default/postgres-postgresql:5432"}}'
Update the controller's Service for the proxied port (i.e. kubectl edit svc -n ingress-nginx ingress-nginx):
  - name: postgres
    port: 5432
    protocol: TCP
    targetPort: 5432
Note: you may have to update the existing ingress-nginx controller deployment's args (i.e. kubectl edit deployments.apps -n ingress-nginx nginx-ingress-controller) to include --tcp-services-configmap=ingress-nginx/tcp-services, and bounce the ingress-nginx controller if you edit the deployment spec (i.e. kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=0 && kubectl scale deployment -n ingress-nginx nginx-ingress-controller --replicas=3).
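For reference, the relevant fragment of the controller Deployment would look roughly like this (a sketch only; the container name and the other flags vary between install methods):
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            # ...the existing flags from your install stay as they are...
            - --tcp-services-configmap=ingress-nginx/tcp-services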
Test the connection:
export PGPASSWORD=$(kubectl get secrets -n default postgresql -o jsonpath={.data.postgresql-password} |base64 -d)
docker run --rm -it \
-e PGPASSWORD=${PGPASSWORD} \
--entrypoint psql \
--network host \
postgres:13-alpine -U postgres -d postgres -h example.com
Note: I manually created an A record in Google Cloud DNS to resolve the hostname to the clusters external IP.
Update: in addition to creating the ingress-nginx config, installing the bitnami/postgresql chart, etc., it was necessary to disable "Proxy Protocol" on the Load Balancer to get the connections working for a deployment in DigitalOcean (otherwise postgres will log "invalid length of startup packet").
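If the load balancer is owned by the ingress-nginx controller Service, one way to toggle that is via DigitalOcean's service annotation (a sketch; the annotation name comes from DigitalOcean's cloud controller manager, and the same setting can also be changed in the DO control panel):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # assumption: this is the Service that provisioned the DO load balancer
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "false"
spec:
  type: LoadBalancer
  # ...ports and selector unchanged from your install...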