Expose Redis as an OpenShift route - kubernetes

I deployed a standalone Redis in OpenShift (CRC) using the Bitnami Helm chart (https://github.com/bitnami/charts/tree/main/bitnami/redis) with these parameters:
helm repo add my-repo https://charts.bitnami.com/bitnami
helm upgrade --install redis-ms my-repo/redis \
--set master.podSecurityContext.enabled=false \
--set master.containerSecurityContext.enabled=false \
--set auth.enabled=false \
--set image.debug=true \
--set architecture=standalone
The Redis master pod reports "Ready to accept connections".
Then I created a Route to expose Redis outside the cluster:
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: redis
spec:
  host: redis-ms.apps-crc.testing
  to:
    kind: Service
    name: redis-ms-master
  port:
    targetPort: tcp-redis
  wildcardPolicy: None
But when I try to connect to "redis-ms.apps-crc.testing:80" I get:
"Unknown reply: H"
whereas if I use oc port-forward --namespace redis-ms svc/redis-ms-master 6379:6379 and then connect to "localhost:6379", it works.

OpenShift Routes are limited to HTTP(S) traffic, because the router needs the HTTP Host header (or the TLS SNI field) to decide where to send the traffic.
An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI.
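That also explains the error you see: the router answers your Redis client with an HTTP response, and redis-cli reports the first byte of that "HTTP/1.1 ..." reply as an unknown RESP type, hence "Unknown reply: H". A quick way to confirm what is actually answering on the route (a sketch; the exact status code returned by the router may vary):
curl -i http://redis-ms.apps-crc.testing
# Expect an HTTP error response from the OpenShift router (for example a 502/503 page),
# not the Redis protocol - which is why redis-cli cannot parse it.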
If you need to expose non-HTTP traffic, like a database or a Redis instance in your case, you can expose the Service directly as a LoadBalancer-type Service. That would look something like this for your Redis Service:
apiVersion: v1
kind: Service
metadata:
  name: redis-ms-master
spec:
  type: LoadBalancer
  ports:
  - name: tcp-redis
    port: 6379
    targetPort: redis
  selector:
    app.kubernetes.io/component: master
Additionally, it actually looks like that specific helm chart supports setting this as a configuration option, master.service.type.
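For example, switching the existing release over without re-specifying all the other values could look like this (a sketch; confirm the exact key against the values of the chart version you installed):
helm upgrade redis-ms my-repo/redis \
  --reuse-values \
  --set master.service.type=LoadBalancer
Keep in mind that on a local CRC cluster nothing provisions cloud load balancers, so a LoadBalancer Service may sit in Pending unless something else assigns the external IP.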

Related

Azure AKS internal load balancer not responding to requests

I have an AKS cluster, as well as a separate VM. The AKS cluster and the VM are in the same VNet (and the same subnet).
I deployed an echo server with the following yaml. I'm able to curl the pod directly from the VM using its VNet IP, but when trying the same through the load balancer, nothing comes back. I'm really not sure what I'm missing. Any help is appreciated.
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: ealen/echo-server
        ports:
        - name: http
          containerPort: 8080
The following screenshots demonstrate the situation.
I expect that when I curl the load balancer's VNet IP, I receive the same response as when I curl the pod IP directly.
Can you check your internal load balancer's health probe?
"For Kubernetes 1.24+ the services of type LoadBalancer with appProtocol HTTP/HTTPS will switch to use HTTP/HTTPS as health probe protocol (while before v1.24.0 it uses TCP). And / will be used as the default health probe request path. If your service doesn’t respond 200 for /, please ensure you're setting the service annotation service.beta.kubernetes.io/port_{port}_health-probe_request-path or service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path (applies to all ports) with the correct request path to avoid service breakage."
(ref: https://github.com/Azure/AKS/releases/tag/2022-09-11)
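If you want to keep the plain internal LoadBalancer Service, the annotation from that release note would go directly on the echo-server Service. A sketch of what that could look like (the probe path / assumes the echo server answers 200 there; adjust it to whatever your app serves):
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # applies to all ports; use service.beta.kubernetes.io/port_{port}_health-probe_request-path
    # to target a single port instead
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-server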
If you are using the nginx-ingress controller, try adding the annotation as described in this doc:
(https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration)
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--reuse-values \
--namespace <NAMESPACE> \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Have you checked whether the pod's IP is correctly mapped as an endpoint of the service? You can check it using:
k describe svc echo-server -n test | grep Endpoints
If not, please check the labels and selectors of your actual deployment (rather than the resources pasted in the description).
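For example (a sketch, assuming the test namespace used above):
# List the endpoint addresses backing the Service
kubectl get endpoints echo-server -n test
# Compare the Service selector with the actual pod labels
kubectl get svc echo-server -n test -o jsonpath='{.spec.selector}'
kubectl get pods -n test --show-labels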
If it is correctly mapped, are you sure that the VM you are using (_#tester) is in the correct subnet, i.e. the one that also contains the iLB IP 10.240.0.226?
Found the solution: the only thing I needed to do was add the following to the Service declaration:
externalTrafficPolicy: 'Local'
Full yaml as below
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  externalTrafficPolicy: 'Local'
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: echo-server
Previously it was set to 'Cluster'.
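If the Service is already deployed, the same change can also be applied in place with a patch instead of re-applying the full manifest (a sketch):
kubectl patch svc echo-server -p '{"spec":{"externalTrafficPolicy":"Local"}}'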
Just got off a call with Azure support; it seems to be a specific bug around this (it happens with newer versions of AKS). Posting the related link here: https://github.com/kubernetes/ingress-nginx/issues/8501

How to make an app accessible over the internet using Ingress or MetalLB on bare metal

I'm running a k8s cluster with one control-plane node and one worker node on bare-metal Ubuntu machines (IPs: 123.223.149.27 and 22.36.211.68).
I deployed a sample app:
kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
Running kubectl get services shows me:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        5d23h
nginx        NodePort    10.100.107.184   <none>        80:30799/TCP   5h48m
and I can access this application inside the cluster with:
kubectl run alpine --image=alpine --restart=Never --rm -it -- wget -O- 10.100.107.184:80
But now I want to access the sample app from outside the cluster, over the internet, via http://123.223.149.27 or later via the domain mywebsite.com, as the DNS of the domain points to 123.223.149.27.
I applied:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
with this config map:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
and this ingress-nginx controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
It is not clear to me whether I have to use an Ingress (in that case I would use ingress-nginx), MetalLB, or both, and how to configure them. I have read a lot about service types like LoadBalancer and NodePort, but I don't think I understood the concepts correctly. I only understand that if I use type LoadBalancer I need something that actually implements a load balancer, and since I am on bare metal that would be MetalLB.
It would be very helpful for my understanding if someone could explain, using this example app, how to make it accessible over the internet.
Since you have a running service inside your Kubernetes cluster, you can expose it via an ingress controller, which is a reverse proxy that routes traffic from outside to your dedicated service(s) inside the cluster.
We'll use ingress-nginx as an example, see https://github.com/kubernetes/ingress-nginx
These are the requirements for reaching your service at mywebsite.com:
- Have access to the DNS records of your domain mywebsite.com
- Install ingress-nginx in your cluster, see https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
Install it using Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--version $VERSIONS
You can look for versions compatible with your Kubernetes cluster version using:
helm search repo ingress-nginx/ingress-nginx --versions
When the installation has finished, you should see an ingress-controller Service that holds an $EXTERNAL-IP. On bare metal this is where MetalLB comes in: it assigns that address to the controller's LoadBalancer Service from the pool you configured.
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.XXX.XXX   XX.XXX.XXX.XX   80:30578/TCP,443:31874/TCP   548d
Now that you have a running ingress controller, you need to create an Ingress object that manages external access to your $Service.
See https://kubernetes.io/docs/concepts/services-networking/ingress/
For example:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    #cert-manager.io/cluster-issuer: letsencrypt-prod
    #nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - mywebsite.com
    #secretName: cert-wildcard # get it from certificate.yaml
  rules:
  - host: mywebsite.com
    http:
      paths:
      - path: / #/?(.*) #/(.*)
        pathType: Prefix
        backend:
          service:
            name: $Service
            port:
              number: 80
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
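Apply it and check that the Ingress picks up the controller's address (a sketch; replace $Service with your actual Service name first):
kubectl apply -f ingress.yaml
kubectl get ingress my-ingress
# The ADDRESS column should eventually show the controller's $EXTERNAL-IP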
As you saw in the ingress file, the commented lines refer to the use of an SSL certificate generated by cert-manager from Let's Encrypt. That can be set up as a separate process, described here: https://cert-manager.io/docs/configuration/acme/;
it depends mainly on your DNS/cloud provider (Cloudflare, Azure, ...).
Finally, in your DNS zone, add a DNS record which maps mywebsite.com to $EXTERNAL-IP, wait a few minutes, and you should be able to reach your service at mywebsite.com.
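Even before the DNS record propagates, you can test the whole path through the ingress controller by forcing the Host header (a sketch; substitute the controller's external IP):
curl -H "Host: mywebsite.com" http://<EXTERNAL-IP>/
# or, equivalently
curl --resolve mywebsite.com:80:<EXTERNAL-IP> http://mywebsite.com/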

Kubernetes MLflow Service Pod Connection

I have deployed a build of mlflow to a pod in my kubernetes cluster. I'm able to port forward to the mlflow ui, and now I'm attempting to test it. To do this, I am running the following test on a jupyter notebook that is running on another pod in the same cluster.
import mlflow
print("Setting Tracking Server")
tracking_uri = "http://mlflow-tracking-server.default.svc.cluster.local:5000"
mlflow.set_tracking_uri(tracking_uri)
print("Logging Artifact")
mlflow.log_artifact('/home/test/mlflow-example-artifact.png')
print("DONE")
When I run this though, I get
ConnectionError: HTTPConnectionPool(host='mlflow-tracking-server.default.svc.cluster.local', port=5000): Max retries exceeded with url: /api/2.0/mlflow/runs/get? (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object>: Failed to establish a new connection: [Errno 111] Connection refused'))
The way I have deployed the mlflow pod is shown below in the yaml and docker:
Yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-tracking-server
  namespace: default
spec:
  selector:
    matchLabels:
      app: mlflow-tracking-server
  replicas: 1
  template:
    metadata:
      labels:
        app: mlflow-tracking-server
    spec:
      containers:
      - name: mlflow-tracking-server
        image: <ECR_IMAGE>
        ports:
        - containerPort: 5000
        env:
        - name: AWS_MLFLOW_BUCKET
          value: <S3_BUCKET>
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-secret
              key: AWS_SECRET_ACCESS_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-tracking-server
  namespace: default
  labels:
    app: mlflow-tracking-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-tracking-server
  ports:
  - name: http
    port: 5000
    targetPort: http
The Dockerfile calls a script that executes the mlflow server command: mlflow server --default-artifact-root ${AWS_MLFLOW_BUCKET} --host 0.0.0.0 --port 5000, yet I cannot connect to the service created for that mlflow pod.
I have tried the tracking URI http://mlflow-tracking-server.default.svc.cluster.local:5000 and I've tried the service EXTERNAL-IP:5000, but nothing I tried can connect and log through the service. Is there anything I have missed in deploying my mlflow server pod to my kubernetes cluster?
Your mlflow-tracking-server service should have ClusterIP type, not LoadBalancer.
Both pods are inside the same Kubernetes cluster; therefore, there is no reason to use the LoadBalancer Service type.
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that’s outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
kubernetes.io
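For reference, a ClusterIP version of the Service from the question could look like this (a sketch; it keeps the same name, port, and selector and simply drops the load-balancer-specific fields):
apiVersion: v1
kind: Service
metadata:
  name: mlflow-tracking-server
  namespace: default
  labels:
    app: mlflow-tracking-server
spec:
  type: ClusterIP
  selector:
    app: mlflow-tracking-server
  ports:
  - name: http
    port: 5000
    targetPort: 5000   # matches the containerPort in the Deployment
The tracking URI used from the notebook stays http://mlflow-tracking-server.default.svc.cluster.local:5000.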
So, to oversimplify this: right now you have no way to access the MLflow URI from the JupyterHub pod. What I would do here is check the proxies of the JupyterHub pod. If you don't have .svc in NO_PROXY, you have to add it. The reason is that you are accessing the internal .svc MLflow URL as if it were on the open internet, while your MLflow URI is only reachable inside the cluster. If adding .svc to NO_PROXY doesn't help, we can take a deeper look. The way to check the proxies is: kubectl get po $JHPODNAME -n $JHNamespace -o yaml
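A quick way to check those proxy settings from the command line (a sketch; $JHPODNAME and $JHNamespace are the placeholders used above):
# Look for HTTP_PROXY / HTTPS_PROXY / NO_PROXY in the pod's environment
kubectl exec -n $JHNamespace $JHPODNAME -- env | grep -i proxy
# Or inspect the pod spec directly
kubectl get po $JHPODNAME -n $JHNamespace -o yaml | grep -i proxy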

Make RabbitMQ cluster publicly accessible

I am using this Helm chart to configure RabbitMQ on a k8s cluster:
https://github.com/helm/charts/tree/master/stable/rabbitmq
How can I make the cluster accessible through a public endpoint? Currently, I have a cluster with the configuration below. I am able to access the management portal via the given hostname (a public endpoint, which is fine). But when I checked inside the management portal, the cluster is only reachable via its internal IP and/or hostname, i.e. rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local and rabbit@<private_ip>. I want to make the cluster public so that all other services outside the VNET can connect to it.
helm install stable/rabbitmq --name rabbitmq \
--set rabbitmq.username=xxx \
--set rabbitmq.password=xxx \
--set rabbitmq.erlangCookie=secretcookie \
--set rbacEnabled=true \
--set ingress.enabled=true \
--set ingress.hostName=rabbitmq.xxx.com \
--set ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
--set resources.limits.memory="256Mi" \
--set resources.limits.cpu="100m"
I haven't tried this with Helm; I built and deployed to Kubernetes directly from .yaml config files, so I only followed the Helm template.
To publish your RabbitMQ service outside the cluster:
1. You need to have an external IP.
If you are using Google Cloud, run these commands:
gcloud compute addresses create rabbitmq-service-ip --region asia-southeast1
gcloud compute addresses describe rabbitmq-service-ip --region asia-southeast1
>address: 35.240.xxx.xxx
Change rabbitmq-service-ip to the name you want, and change the region to your own.
2. Configure the Helm parameters:
service.type=LoadBalancer
service.loadBalancerSourceRanges=35.240.xxx.xxx/32 # IP address you got from gcloud
service.port=5672
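Applied to the chart from the question, those parameters could be passed roughly like this (a sketch; check the key names against the chart's values file, and note that the service.yaml template further below pins the address with loadBalancerIP, so depending on the chart version service.loadBalancerIP may be the value you actually want):
helm upgrade rabbitmq stable/rabbitmq \
  --reuse-values \
  --set service.type=LoadBalancer \
  --set 'service.loadBalancerSourceRanges={35.240.xxx.xxx/32}' \
  --set service.port=5672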
3. Deploy, then try to telnet to your RabbitMQ service:
telnet 35.240.xxx.xxx 5672
Trying 35.240.xxx.xxx...
Connected to 149.185.xxx.xxx.bc.googleusercontent.com.
Escape character is '^]'.
Gotcha! It worked.
FYI:
Here is a base template if you want to create the .yaml files and deploy without Helm.
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  type: LoadBalancer
  loadBalancerIP: 35.xxx.xxx.xx
  ports:
    # the port that this service should serve on
    - port: 5672
      name: rabbitmq
      targetPort: 5672
      nodePort: 32672
  selector:
    name: rabbitmq
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  replicas: 1
  selector:
    matchLabels:
      name: rabbitmq
  template:
    metadata:
      labels:
        name: rabbitmq
      annotations:
        prometheus.io/scrape: "false"
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6.8-management
        ports:
        - containerPort: 5672
          name: rabbitmq
        securityContext:
          capabilities:
            drop:
              - all
            add:
              - CHOWN
              - SETGID
              - SETUID
              - DAC_OVERRIDE
          readOnlyRootFilesystem: true
      - name: rabbitmq-exporter
        image: kbudde/rabbitmq-exporter
        ports:
        - containerPort: 9090
          name: exporter
      nodeSelector:
        beta.kubernetes.io/os: linux
Hope this helps!
From the Helm values you passed, I see that you have configured your RabbitMQ service with an nginx Ingress.
You should create a DNS record for your ingress.hostName (rabbitmq.xxx.com) pointing at the ingress IP (on GCP) or CNAME (on AWS) of your nginx-ingress load balancer. That DNS hostname (rabbitmq.xxx.com) is your public endpoint to access your RabbitMQ service.
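To find that IP or CNAME, look at the LoadBalancer Service created for the ingress controller (a sketch; the namespace and Service name depend on how the nginx-ingress controller was installed):
kubectl get svc --all-namespaces | grep -i nginx-ingress
# The EXTERNAL-IP (GCP) or hostname (AWS) column is what the DNS record should point to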
Ensure that your nginx-ingress controller is running in your cluster in order for the ingresses to work. If you are unfamiliar with ingresses:
- Official Ingress Docs
- Nginx Ingress installation guide
- Nginx Ingress helm chart
Hope this helps!

How to add a static IP to nginx-ingress installed via helm

I would like to create an nginx-ingress that I can link to a reserved IP address. The main reason is that I want to minimize manual steps. Currently, the infrastructure is set up automatically with Terraform, but I cannot get nginx-ingress to use the reserved IP. I already have nginx-ingress working, but it creates its own IP address.
According to the nginx-ingress site (https://kubernetes.github.io/ingress-nginx/examples/static-ip/), this should be possible. First, one should create a load-balancer service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: 34.123.12.123
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
One can then have the controller publish this Service's IP by setting the --publish-service flag in the nginx-ingress-controller.yaml file. However, I install this via Helm:
helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
How can I link the publish service to nginx-ingress-lb in my Helm installation (or upgrade)?
Assuming your cloud provider supports LBs with static IPs (AWS, for example, will give you a CNAME instead of an IP):
You will have to set it as a Helm value, as shown below. Once you do this, you can set your ingress annotation kubernetes.io/ingress.class: nginx and your ingress will automatically get the same IP address.
helm install stable/nginx-ingress --set controller.service.loadBalancerIP=XXXX,rbac.create=true
The original answer is a bit outdated, so here's a working example for 2022.
Note: You cannot edit an existing ingress-nginx load balancer service, but you can pass the external IP you want it to use when installing it. Keep in mind, you need to have that external IP set up ahead of time in your cloud environment.
Here is the command that worked for me when performing a helm installation:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--debug \
--set controller.service.loadBalancerIP=<YOUR_STATIC_IP>
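To confirm the controller picked up the reserved address, check its Service after the install (a sketch; the Service name follows the release name used above):
kubectl get svc ingress-nginx-controller -n ingress-nginx
# EXTERNAL-IP should show <YOUR_STATIC_IP> once the load balancer has been provisioned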
More info:
ingress-nginx docs
ingress-nginx values that can be overridden with --set