Exposing Redis with Ingress Nginx Controller - Kubernetes

Hello, when I use a NodePort to expose my Redis service it works fine and I am able to access it.
But if I switch to the Ingress Nginx controller, it refuses to connect, even though other apps work fine with Ingress.
Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  # type: NodePort
  ports:
    - name: http
      port: 6379
      targetPort: 6379
      protocol: TCP
      # nodePort: 30007
  selector:
    app: redis
And here is the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://test.hefest.io"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
    - secretName: letsencrypt-prod
      hosts:
        - redis-dev.domain.com
  rules:
    - host: redis-dev.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: redis-svc
              servicePort: 6379
Any idea what the issue could be?
I am using this ingress controller: https://github.com/nginxinc/kubernetes-ingress

Redis works on port 6379, which is not an HTTP port (80/443), so you need to enable TCP/UDP support in the nginx ingress controller. The minikube docs here show how to do it for Redis.
Update the TCP and/or UDP services configmaps
Borrowing from the tutorial on configuring TCP and UDP services with the ingress nginx controller, we will need to edit the ConfigMap that is installed by default when the minikube ingress addon is enabled.
There are two ConfigMaps, one for TCP services and one for UDP services. By default they look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
Since these ConfigMaps are centralized and may already contain configuration, it is best to patch them rather than completely overwrite them.
Let’s use this redis deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - image: redis
          imagePullPolicy: Always
          name: redis
          ports:
            - containerPort: 6379
              protocol: TCP
Create a file redis-deployment.yaml and paste the contents above. Then install the redis deployment with the following command:
kubectl apply -f redis-deployment.yaml
Next we need to create a service that can route traffic to our pods:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
Create a file redis-service.yaml and paste the contents above. Then install the redis service with the following command:
kubectl apply -f redis-service.yaml
To add a TCP service to the nginx ingress controller, patch the tcp-services ConfigMap (the example below assumes the minikube addon, where it lives in kube-system; adjust the namespace to wherever your controller keeps it):
kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'
Where:
6379 : the port your service should listen to from outside the minikube virtual machine
default : the namespace that your service is installed in
redis-service : the name of the service
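In general, each entry under data follows this pattern, so the same patch works for any other TCP service you want to expose:
"<external-port>": "<namespace>/<service-name>:<service-port>"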
We can verify that our resource was patched with the following command:
kubectl get configmap tcp-services -n kube-system -o yaml
We should see something like this:
apiVersion: v1
data:
  "6379": default/redis-service:6379
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-01T16:19:57Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: tcp-services
  namespace: kube-system
  resourceVersion: "2857"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 4f7fac22-e467-11e9-b543-080027057910
The only thing you need to validate is that there is an entry under the data property that looks like this:
"6379": default/redis-service:6379
Patch the ingress-nginx-controller
There is one final step that must be done in order to obtain connectivity from outside the cluster: we need to patch the nginx controller so that it listens on port 6379 and can route traffic to your service. To do this, create a patch file.
spec:
  template:
    spec:
      containers:
        - name: ingress-nginx-controller
          ports:
            - containerPort: 6379
              hostPort: 6379
Create a file called ingress-nginx-controller-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n kube-system
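On a regular cluster (outside minikube, where hostPort is enough) you will usually also have to expose the new port on the Service that sits in front of the controller. A rough sketch, assuming the controller Service is also called ingress-nginx-controller in kube-system and that the port name tcp-redis is just a placeholder; adjust names and namespace to your installation:
kubectl patch service ingress-nginx-controller -n kube-system --patch '{"spec":{"ports":[{"name":"tcp-redis","port":6379,"targetPort":6379,"protocol":"TCP"}]}}'
Once the port is reachable you can test the whole chain with redis-cli, for example:
redis-cli -h redis-dev.domain.com -p 6379 ping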

The way I made it work is by enabling ssl-passthrough on the nginx-ingress controller.
Once the controller was patched, I was able to connect.
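Note that ssl-passthrough is disabled by default in the kubernetes/ingress-nginx controller, so the nginx.ingress.kubernetes.io/ssl-passthrough annotation below only takes effect if the controller was started with the --enable-ssl-passthrough flag. A minimal sketch of the patched container args (the exact Deployment name and namespace depend on how the controller was installed):
args:
  - /nginx-ingress-controller
  # keep whatever arguments your installation already has and append:
  - --enable-ssl-passthrough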
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  tls:
    - hosts:
        - <host_address>
      secretName: <k8s_secret_name>
  rules:
    - host: <host_address>
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: redis-service
                port:
                  number: 6380
Python snippet to connect:
import redis

r = redis.StrictRedis(host='<host_address>',
                      port=443, db=0, ssl=True,
                      ssl_ca_certs='server.pem')
print(r.ping())
I do use Redis with TLS in my case.
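With ssl-passthrough the TLS connection is terminated by Redis itself, so the server has to be started with TLS enabled (supported natively in Redis 6+). A rough sketch of the server-side flags, assuming the certificate paths are placeholders and that port 6380 (the backend port in the Ingress above) is the TLS port the service forwards to:
redis-server --port 0 --tls-port 6380 \
  --tls-cert-file /tls/redis.crt \
  --tls-key-file /tls/redis.key \
  --tls-ca-cert-file /tls/ca.crt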

Related

How do I make a forward-proxy server on k8s and ALB(or NLB)?

I created a forward proxy server on EKS pods behind an ALB (created by the AWS Load Balancer Controller). Each pod can serve a response on port 8118 through the ALB.
The resources, such as the pods and the Ingress, looked good to me. Then I tested whether the proxy server works with
curl -Lx k8s-proxy-sample-domain.ap-uswest-1.elb.amazonaws.com:18118 ipinfo.io
Normally I would get a random IP address back from ipinfo.io, but I didn't. So I also tried port-forwarding, like this:
kubectl port-forward specific-pod 8118:8118
Then I retried the proxied access from my host:
curl -Lx localhost:8118 ipinfo.io
In this case, it worked. I cannot figure out the reason. What is the difference between going through the ALB and port-forwarding? Should I use an NLB for some reason, or is something misconfigured?
My environment
k8s version: v1.18.2
node type: Fargate
Manifest
Here is my manifest.
---
apiVersion: v1
kind: Namespace
metadata:
  name: tor-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tor-proxy
  name: tor-proxy-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: tor-proxy
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tor-proxy
    spec:
      containers:
        - image: dperson/torproxy
          imagePullPolicy: Always
          name: tor-proxy
          ports:
            - containerPort: 8118
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tor-proxy
  name: tor-proxy-service
  namespace: tor-proxy
spec:
  ports:
    - port: 18118
      targetPort: 8118
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: tor-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: tor-proxy
  name: tor-proxy-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 18118}]'
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: tor-proxy-service
              servicePort: 18118
Use an NLB, not an ALB, because it passes the client IP through the proxy server toward the target site.
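A rough sketch of what that could look like: drop the Ingress and expose the proxy Service directly through an NLB by switching it to type LoadBalancer with the NLB annotation. The annotation below is the in-tree one; if you use the AWS Load Balancer Controller the annotation names differ, so treat this as an illustration only:
apiVersion: v1
kind: Service
metadata:
  name: tor-proxy-service
  namespace: tor-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: tor-proxy
  ports:
    - port: 18118
      targetPort: 8118
      protocol: TCP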

Error {"message":"failure to get a peer from the ring-balancer"} using kong ingress

Getting this error message when I try to access it with the public IP:
"{"message":"failure to get a peer from the ring-balancer"}"
Looks like Kong is unable to reach the upstream services.
I am using the voting app.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: voting-service
              servicePort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: voting-app
spec:
  ports:
    - targetPort: 80
      port: 80
  selector:
    name: voting-app-pod
    app: voting-app
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-pod
  labels:
    name: voting-app-pod
    app: voting-app
spec:
  template:
    metadata:
      labels:
        name: voting-app-pod
        app: voting-app
    spec:
      containers:
        - name: voting-app
          image: dockersamples/examplevotingapp_vote
          ports:
            - containerPort: 80
  replicas: 2
  selector:
    matchLabels:
      app: voting-app
There could be one of many things wrong here, but essentially your ingress cannot reach your backend.
Is your backend up and running?
Check that the backend pods are "Running":
kubectl get pods
Check that the backend deployment has all replicas up:
kubectl get deploy
Connect to the app pod and make a request to localhost:80:
kubectl exec -it <pod-name> sh
# curl http://localhost
Connect to the ingress pod and see if you can reach the service from there
kubectl exec -it <ingress-pod-name> sh
# dig voting-service (can you DNS resolve it)
# telnet voting-service 80
# curl http://voting-service
This issue might shed some light on why you can't reach the backend service. What HTTP error code are you seeing?
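"failure to get a peer from the ring-balancer" generally means Kong could not find a healthy upstream target for the route, so a quick first check is whether the Service actually has endpoints behind it, for example:
kubectl get endpoints voting-service
kubectl describe service voting-service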
The problem was resolved after deploying the services and deployments in the kong namespace instead of the default namespace. Now I can access the application with the Kong ingress public IP.
It looks like the Kong ingress is not able to resolve DNS for the headless service; we need to mention the FQDN in the ingress YAML.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: telehealth-ingress
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: voting-service
                port:
                  number: 80
Try this, I think it will work.
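If you would rather keep the app in the default namespace, a rough alternative sketch is an ExternalName Service in the kong namespace that points at the voting service's in-cluster FQDN, and the Ingress then references that Service instead:
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  namespace: kong
spec:
  type: ExternalName
  externalName: voting-service.default.svc.cluster.local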

Kubernetes nginx ingress controller throws error when trying to obtain endpoints for service

I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created deployment, ClusterIP, and ingress configuration files.
First, after creating a cluster, I run this command to install the nginx ingress:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use Helm v3.
Then I apply the deployment and ClusterIP configurations.
Deployment and ClusterIP configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-production-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: app-production
  template:
    metadata:
      labels:
        component: app-production
    spec:
      containers:
        - name: app-production
          image: eu.gcr.io/my-project/app:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluser-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-production-cluster-ip-service
              servicePort: 80
I get this error from the Google Cloud Platform logs within the ingress controller:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME ENDPOINTS AGE
app-production-cluser-ip-service 10.60.0.12:80,10.60.1.13:80 17m
I am really not sure what I'm doing wrong.
The service name mentioned in the ingress does not match the actual Service name (cluster vs. cluser). Please recreate the service and check:
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
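You can then confirm that the Ingress and the Service line up, and that the Service has endpoints, with something like:
kubectl describe ingress ingress-service
kubectl get endpoints app-production-cluster-ip-service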

Ingress configuration for k8s in different namespaces

I need to configure Ingress Nginx on Azure k8s, and my question is whether it is possible to have the ingress configured in one namespace, e.g. ingress-nginx, and some services in another namespace, e.g. resources.
My files look like this:
# ingress-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
# configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
---
# default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: default-http-backend
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
# app-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - api-sand.fake.com
  rules:
    - host: api-sand.fake.com
      http:
        paths:
          - backend:
              serviceName: api-sand
              servicePort: 80
            path: /
And then I have an app running in the resources namespace, and the problem is that I am getting the following error:
error obtaining service endpoints: error getting service resources/api-sand from the cache: service resources/api-sand was not found
If I deploy api-sand in the same namespace as the ingress, then the service works fine.
I would like to simplify the answer a bit further for those who are relatively new to Kubernetes and its ingress options in particular.
There are two separate things that need to be present for ingress to work:
Ingress Controller (essentially a separate Pod/Deployment along with a Service that is used for routing and proxying, based on an nginx container for example);
Ingress rules (a separate Kubernetes resource with kind: Ingress; it will only take effect if an Ingress Controller is already deployed).
Now, the Ingress Controller can be deployed in any namespace and is, in fact, usually deployed in a namespace separate from your app services. It can out of the box see Ingress rules in all namespaces in the cluster and will pick them up.
The Ingress rules, however, must reside in the namespace where the app that they configure resides.
There are some workarounds for that, but this is the most common approach.
Instead of creating the ingress app-ingress in the ingress-nginx namespace, you should create it in the namespace where you have the api-sand service and pod.
Alternatively, there is a way to have the ingress in one namespace and the service in another namespace via ExternalName. Check out Kubernetes Cross Namespace Ingress Network.
Here is an example referred from here.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: test-service.namespacename.svc.cluster.local
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
It's actually possible: you can define the ingress and a Service of type ExternalName in namespace A, while the ExternalName points to the DNS name of the service in namespace B. For further details, please refer to this answer: https://stackoverflow.com/a/51899301/2995449
Outside traffic comes in through the ingress controller service, which is responsible for routing the traffic based on the defined routing rules, or what we call ingress rules in the k8s world.
In other words, ingress resources are just routing rules (think of them in a way that's similar to DNS records), so when you define an ingress resource you just define a rule for the ingress controller to act on and route traffic accordingly.
Solution:
Since Ingress resources are nothing but routing rules, you could define such rules anywhere in the cluster (in any namespace) and the controller should pick them up, as it monitors the creation of such resources and reacts accordingly.
Here's how to create ingress easily using kubectl
kubectl create ingress <name> -n namespaceName --rule="host/prefix=serviceName:portNumber"
Note: add --dry-run=client -oyaml to generate a YAML manifest file.
Or you may create a Service of type ExternalName in the same namespace where you have defined your ingress. Such an external service can point to any URL (a service that lives outside the namespace or even outside the k8s cluster).
Here's an example that shows how to create an ExternalName service using kubectl:
kubectl create service externalname ingress-ns -n namespaceName --external-name=serviceName.namespace.svc.cluster.local --tcp=80:80 --dry-run=client -oyaml
This should generate something similar to the following:
kind: Service
apiVersion: v1
metadata:
  name: nginx
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: serviceName.namespace.svc.cluster.local # or any external svc
  ports:
    - port: 80       # specify the port of service you want to expose
      targetPort: 80 # port of external service
As described above, create an ingress as below:
kubectl create ingress <name> -n namespaceName --rule="host/prefix=serviceName:portNumber"
Note: add --dry-run=client -oyaml to generate a YAML manifest file.
The way that worked for me is creating an ingress for each namespace:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: production
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: api.youtube.com
      http:
        paths:
          - pathType: Prefix
            path: "/api/users"
            backend:
              service:
                name: youtube-srv
                port:
                  number: 3000
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  namespace: development
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: dev.youtube.com
      http:
        paths:
          - pathType: Prefix
            path: "/api/users"
            backend:
              service:
                name: youtube-srv
                port:
                  number: 3000
There is a way to configure your default backend per ingress resource, although the documentation says it is usually configured at the ingress controller level.
For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  namespace: myns
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  ...
Here default-http-backend must be defined in the same namespace as the ingress resource.

Migrate to istio from nginx ingress

I have a simple single-page Golang web application that I'm trying to migrate to Istio.
My prod setup (via nginx ingress):
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - mycustomapp.mycustomapp.com
      secretName: go-tls
  rules:
    - host: mycustomapp.mycustomapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mycustomapp
              servicePort: 80
And I'm trying to build at least an HTTP configuration for Istio:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goapp
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
    - host: mycustomapp.mycustomapp.com
      http:
        paths:
          - path: /
            backend:
              serviceName: mycustomapp
              servicePort: 80
But I always get a 404 from the Istio LB on a clean cluster with only Istio 0.7.1 installed. Samples like bookinfo and httpbin work well.
Application yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mycustomapp
  template:
    metadata:
      labels:
        k8s-app: mycustomapp
    spec:
      containers:
        - name: mycustomapp
          image: xxxx.azurecr.io/mycustomapp:999
          ports:
            - containerPort: 80
              protocol: TCP
      imagePullSecrets:
        - name: xxxx
      serviceAccountName: mycustomapp
---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    k8s-app: mycustomapp
To get rid of the 404 error in your case, it should be enough to add the correct port name to the service and deployment YAML files and add the Istio sidecar to the deployment YAML file. Then you should redeploy all changed files.
Perhaps you may also need to add the label app: mycustomapp to the service and deployment, but I'm not sure whether it is required or optional.
Here is an example of the service.yaml file with the correct port name (you can read more about port names here):
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: mycustomapp
    k8s-app: mycustomapp
  name: mycustomapp
spec:
  type: ClusterIP
  ports:
    - name: http-80
      port: 80
      targetPort: 80
  selector:
    k8s-app: mycustomapp
Ensure you also have the correct port name in your deployment file, for example:
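A sketch of what that could look like in the deployment's container spec, reusing the http-80 name from the Service above:
ports:
  - name: http-80
    containerPort: 80
    protocol: TCP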
You can add the istio sidecar to the container manually, following these steps:
Download and unpack the latest Istio release suitable for your OS from https://github.com/istio/istio/releases
Change directory to the Istio package. For example, if the package is istio-0.7:
cd istio-0.7
Create inject config:
kubectl create -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml --dry-run -o=jsonpath='{.data.config}' > inject-config.yaml
Create mesh config:
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
Add istio sidecar container to your deployment:
bin/istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--filename path/to/original/deployment.yaml \
--output deployment-injected.yaml
Deploy new deployment:
kubectl apply -f deployment-injected.yaml
If you want to have automatic sidecar injection, follow this manual.
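In most Istio versions, with the sidecar injector webhook installed, automatic injection comes down to labelling the target namespace and recreating the pods, for example:
kubectl label namespace default istio-injection=enabled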
You can check if the sidecar has been injected into the deployment:
$ kubectl get deployment mycustomapp -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
mycustomapp 1 1 1 1 3h mycustomapp,istio-proxy nginx:1.7.9,docker.io/istio/proxy:0.7.1 k8s-app=mycustomapp