How to use wildcard in nginx loadbalancersourceranges under bitnami nginx helm-chart - kubernetes-helm

I am using the Bitnami Helm chart to deploy the nginx-ingress-controller on a k8s cluster using ArgoCD.
It works well, and I am limiting access via loadBalancerSourceRanges in values.yaml:
loadBalancerSourceRanges:
  - 205.231.13.209/32
  - 203.537.72.62/32
  # etc.
I have a new client that wants the organization's entire range to be open, so it needs to cover all addresses like
213.137.72.1, 213.137.72.2, 213.137.72.3, 213.137.72.4 and so on.
How can I solve this? Is it possible to use a wildcard here somehow?
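For what it's worth, loadBalancerSourceRanges takes CIDR blocks rather than wildcards, and a /24 block already covers every address in 213.137.72.x, so a sketch of the values (reusing the entries above) might look like this:
loadBalancerSourceRanges:
  - 205.231.13.209/32
  - 203.537.72.62/32
  - 213.137.72.0/24   # one CIDR entry covering the whole 213.137.72.x range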

Related

How to solve via Rancher a Kubernetes Ingress Controller Fake Certificate error

I installed Rancher 2.6 on top of a Kubernetes cluster, with cert-manager version 1.7.1 and letsEncrypt as the TLS source for Rancher's ingress:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.1 --set installCRDs=true --create-namespace
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=MYDOMAIN.org \
  --set bootstrapPassword=MYPASSWORD \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=cert@mydomain.org \
  --set letsEncrypt.ingress.class=nginx
After the installation was done, Rancher was successfully deployed at https://mydomain.org.
LetsEncrypt SSL worked here fine. With Rancher I created a new RKE2 Cluster for my Apps.
So, I created a new Deployment for testing
"rancher/hello-world:latest"
3x Replicas
Calling the NodePort IP address directly with the port worked: http://XXXXXX:32599/
At this point I want to use an https subdomain, hello.mydomain.org.
After studying the documentation, my approach was to create a new Ingress; I created it as shown in the screenshot (not included here).
After creating the new Ingress, I checked the Ingresses section of my hello-world deployment, and the new Ingress is available there.
My expectation was that I could now go to https://hello.mydomain.org. But https doesn't work here; instead I got:
NET::ERR_CERT_AUTHORITY_INVALID
Subject: Kubernetes Ingress Controller Fake Certificate
Issuer: Kubernetes Ingress Controller Fake Certificate
Expires on: 03.09.2023
Current date: 03.09.2022
Where did I make a mistake? How to use LetsEncrypt for my deployments?
The fake certificate usually implies that the ingress controller is serving its default (fake) certificate instead of the one you expect. While a particular Ingress resource might be served over HTTP as expected, the controller doesn't consider it servable over HTTPS. The most likely explanation is that the certificate is missing and the Ingress host isn't configured for HTTPS. When you installed Rancher, you only configured Rancher's own Ingress; you need to set up certificates for each Ingress resource separately.
You didn't mention which ingress controller you are using. With LE or other ACME-based certificate issuers you'll usually need a certificate controller to manage certificate generation and renewal; I'd recommend cert-manager. There is an excellent tutorial for setting up LE, cert-manager and nginx-ingress-controller. If you're using Traefik, it is capable of generating LE certificates by itself, but the support is only partial in Kubernetes environments (i.e. no high availability), so your best bet is to use cert-manager even with it.
Even once you have that set up, cert-manager doesn't automatically issue certificates for every Ingress, only for those that request one; you need annotations for that.
With cert-manager, once you have set up the Issuer/ClusterIssuer and annotation, your ingress resource should look something like this (you can check the YAML from rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  name: hello-ingress
  namespace: hello-ns
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - backend:
              service:
                name: hello-service
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - hello.example.com
      secretName: hello-letsencrypt-cert
You might need to edit YAML directly and add spec.tls.secretName. If all is well, once you apply metadata.annotations and have set up spec.tls.hosts and spec.tls.secretName, the verification should happen soon and the ingress address should change to https://hello.example.com.
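For completeness, a minimal ClusterIssuer sketch matching the letsencrypt-staging annotation above (the contact email and secret name are placeholders you would replace):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint; switch to the production URL once testing succeeds
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@mydomain.org          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-staging-key    # placeholder secret for the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx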
As a side note, I've also seen this issue when the Ingress sits behind a reverse proxy, such as HAProxy, and that reverse proxy (or the Ingress) is not properly set up to use the PROXY protocol. You don't mention using one, but I'll note it for the record.
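If that ever applies, ingress-nginx has a use-proxy-protocol ConfigMap option; a sketch of the Helm values, assuming the controller is deployed with the ingress-nginx chart:
controller:
  config:
    use-proxy-protocol: "true"   # expect the PROXY protocol header from the fronting proxy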
If these steps don't solve your problem, you should check kubectl describe on the ingress and kubectl logs on the nginx-controller pods and see if anything stands out.
EDIT: I jumped to a conclusion, so I restructured this answer to also note the possibility of the certificate manager missing altogether.

ingress-nginx k8 settings configuring controller.customPorts

As per: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
I'm trying to install ingress-nginx with custom ports, but it does not expose those ports when I pass in the controller.customPorts parameter. I think I'm not passing it in the right format. The documentation says
A list of custom ports to expose on the NGINX ingress controller pod. Follows the conventional Kubernetes yaml syntax for container ports.
Can anyone explain to me what that format should be?
Assuming they mean what shows up in Pod definitions:
- port: 1234
  name: alan
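For what it's worth, the conventional Kubernetes container-port syntax the docs refer to uses containerPort, name and protocol, so a sketch of the values file might look like this (the nesting under controller.customPorts and the example port are assumptions, not confirmed against the chart):
controller:
  customPorts:
    - name: custom-port        # hypothetical port name
      containerPort: 9090      # hypothetical port number
      protocol: TCP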

Kubernetes - Expose Website using nginx-ingress

I have a website running inside a Kubernetes cluster.
I can access it locally, but I want to make it available over the internet (I have a registered domain). However, the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
    - name: http
      protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.carina.bernrieder.de
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service
              servicePort: 3000
So I'm using Helm to install the nginx controller, but after that, in kubectl get all, the external IP of the nginx controller keeps pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de
Here is guide on using nginx ingress to expose an application on minikube
As @Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details: if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created, so your ingress can use it to forward external traffic to Pods deployed on your Kubernetes cluster.
Minikube doesn't have such capabilities; it cannot call any API to have additional infrastructure resources created, as happens in cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP that your domain points to, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl
www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own), provided you add the following entry to your /etc/hosts file, so DNS isn't used and the name resolves based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed in the ingress list will be the internal cluster IP, and the other one will be external only from your Minikube cluster's perspective; it is still an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, it has no chance of working without additional configuration.
Another important thing: you didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer, most probably behind a NAT router. If that's your case, it won't be so easy to expose it on the public internet. You will need to configure proper port-forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to reach your computer on the internet via your dynamic public IP.
Minikube was designed mainly for playing with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or some sort of VPS.

Nginx Ingress Kube

I'm confused about nginx ingress with Kubernetes. I've been able to use it with "basic nginx auth" (unable to do so with oauth2 yet).
I've installed via helm:
helm install stable/nginx-ingress --name app-name --set rbac.create=true
This creates two services, an nginx-ingress-controller and an nginx-ingress-backend.
When I create an ingress, this ingress is targeted towards one and only one nginx-ingress-controller, but I have no idea how:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: kube-system
spec:
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-deployment-service
              servicePort: 8080
When I get this Ingress from the output of kubectl get ingress -n kube-system, it has a public, external IP.
What's concerning is that basic-auth DOESN'T APPLY to that external IP; it's wide open! Nginx authentication only kicks in when I try to visit the nginx-ingress-controller's IP.
I have a lot of questions.
How do I make an ingress created from kubectl apply -f ingress.yaml target a specific nginx-ingress-controller?
How do I keep this new ingress from having an external IP?
Why isn't nginx authentication kicking in?
Which IP am I supposed to use (the nginx-ingress-controller's or the generated one)?
If I'm supposed to use the generated IP, what about the one from the controller?
I have been searching for decent, working examples (and poring over sparse, changing documentation and GitHub issues) for literally days.
EDIT:
In this "official" documentation, it's unclear as to weather or not http://10.2.29.4/ is the IP from the ingress or the controller. I assume the controller because when I run this, the other doesn't even authenticate (it let's me in without asking for a password). Both IP's I'm using are external IPs (publicly available) on GCP.
I think you might have misunderstood a couple of concepts.
An Ingress is not a job (nor a service, nor a pod). It is just configuration; it cannot have an "IP". Think of an ingress as a routing rule, or an entry in a routing table, for your cluster.
Nginx-ingress-controller is the service of type LoadBalancer, with actual running pods behind it, that implements the ingress rules you create for your cluster.
Nginx-ingress-backend is likely a default backend that your nginx-ingress-controller routes to when no matching rule is found. See this.
In general, your nginx-ingress-controller should be the only entry point into your cluster. Other services in your cluster should be of type ClusterIP so that they are not exposed outside the cluster and are only reachable through the nginx-ingress-controller. In your case, since your service can be accessed from outside directly, it is probably not of type ClusterIP; change its type to ClusterIP to get it protected.
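For illustration, a minimal sketch of the backing service switched to ClusterIP (the selector label is an assumption; the name, namespace and port are taken from the Ingress above):
apiVersion: v1
kind: Service
metadata:
  name: tomcat-deployment-service
  namespace: kube-system
spec:
  type: ClusterIP          # not reachable from outside the cluster
  selector:
    app: tomcat            # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080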
Based on the above, I'll gladly provide further help with the questions you have.
Some readings:
What is ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
K8s Services and external accessibility: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

Expose port 80 on Digital Ocean's managed Kubernetes without a load balancer

I would like to expose my Kubernetes Managed Digital Ocean (single node) cluster's service on port 80 without the use of Digital Ocean's load balancer. Is this possible? How would I do this?
This is essentially a hobby project (I am beginning with Kubernetes) and just want to keep the cost very low.
You can deploy an Ingress configured to use the host network and port 80/443.
DO's firewall for your cluster doesn't have 80/443 inbound open by default.
If you edit the auto-created firewall the rules will eventually reset themselves. The solution is to create a separate firewall also pointing at the same Kubernetes worker nodes:
$ doctl compute firewall create \
    --inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
    --tag-names=k8s:CLUSTER_UUID \
    --name=k8s-extra-mycluster
(Get the CLUSTER_UUID value from the dashboard or the ID column from doctl kubernetes cluster list)
Create the nginx ingress using the host network. I've included the helm chart config below, but you could do it via the direct install process too.
EDIT: The Helm chart in the above link has been DEPRECATED; therefore the correct way to install the chart (as per the new docs) is:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
After this repo is added & updated:
# For Helm 2
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml
# For Helm 3
$ helm install myingress stable/nginx-ingress -f myingress.values.yml
# EDIT: the new way to install with Helm 3
$ helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yml
myingress.values.yml for the chart:
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
You should be able to access the cluster on :80 and :443 via any worker node IP, and it'll route traffic to your ingress.
Since node IPs can and do change, look at deploying external-dns to manage DNS entries pointing to your worker nodes. Again, using the Helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):
# For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns
# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml
mydns.values.yml for the chart:
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
Create a Kubernetes Ingress resource to route requests to an existing Kubernetes service:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing123-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: testing123.example.com # the domain you want associated
      http:
        paths:
          - path: /
            backend:
              serviceName: testing123-service # existing service
              servicePort: 8000 # existing service port
After a minute or so you should see the DNS records appear and become resolvable:
$ dig testing123.example.com # should return worker IP address
$ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service
(Edit: editing the automatically created firewall rules eventually breaks; add a separate firewall instead.)
A NodePort Service can do what you want. Something like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80        # the Service's own port (required field)
      nodePort: 80    # outside the default 30000-32767 nodePort range unless --service-node-port-range is widened
      targetPort: 80
This will redirect incoming traffic from port 80 of the node to port 80 of your pod. Publish the node IP in DNS and you're set.
In general exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.