I'm working on a Helm chart and it should do the following:
Allow external requests over HTTPS only (to a specific service).
Allow internal requests over HTTP (to the same specific service).
I tried creating two Ingresses:
The first Ingress is external and accepts only HTTPS requests (which is working as expected). I used the following annotations:
kubernetes.io/ingress.class: nginx
ingress.citrix.com/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
The second Ingress is internal (with the annotation kubernetes.io/ingress.class: "gce-internal"). It is meant to accept requests only from inside the Minikube cluster, but it is not working:
Running nslookup <host-name> from outside the Minikube cluster returns ** server can't find <host-name>: NXDOMAIN.
Running nslookup <host-name> from inside the Minikube cluster (minikube ssh) returns ;; connection timed out; no servers could be reached.
Running curl -kv "<host-name>" returns curl: (6) Could not resolve host: <host-name>.
Any ideas on how to fix this?
For internal communication, ideally you should be using the service name as the hostname.
However, creating an internal ingress is possible, but it will give you one single IP address; using that IP address you can call the service.
Assuming you are already running the NGINX ingress controller and its service, you can edit the ingress controller's Service and add or update this annotation:
annotations:
  cloud.google.com/load-balancer-type: "Internal"
The annotation above will recreate the LoadBalancer service with an internal IP for the ingress. You can then create the ingress and resolve it using curl.
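For context, here is a minimal sketch of what the patched controller Service could look like; the name ingress-nginx-controller and the selector label are assumptions, not taken from the question:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed controller service name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # ask GCP for an internal LB
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443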
https://stackoverflow.com/a/62559152/5525824
Still on GCP: if you are not getting a static IP address, you can use this snippet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.regional-static-ip-name: STATIC_IP_NAME
    kubernetes.io/ingress.class: "gce-internal"
Read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#static_ip_addressing
I have a web application running behind a ClusterIP service on a worker node, on port 5001. I'm also using k3s for the cluster deployment; I checked the cluster connection and it's running fine.
The deployment has the container port set to 5001:
ports:
  - containerPort: 5001
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}
And here is the ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-ms
                port:
                  number: 80
I'm getting a 502 Bad Gateway error whenever I type in my worker or master IP address.
Expected: it should return the web application page.
I looked online and most answers mention a wrong port on the service or ingress, but my ports are correct; yes, I triple-checked it:
- Calling the user-ms service on port 80 from another pod -> worked.
- Calling the cluster IP on the worker node on port 5001 -> worked.
The ports are correct, so why is the ingress returning 502?
I attached screenshots (sorry for the images, but I'm using a streaming machine to access the terminal, so I can't copy and paste) showing: the ingress describe output, the describe output of the nginx ingress controller pod, the pod status (running normally), and the logs of the nginx ingress pod.
How should I go about debugging this error?
OK, I managed to figure this out: in its default setup K3s uses Traefik as the default ingress controller, which is why my nginx ingress logs didn't show anything about the 502 Bad Gateway.
I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
Now when I run kubectl get pods --all-namespaces I no longer see a Traefik pod running; previously there were Traefik pods running.
Once all of that was done, I applied the ingress once again -> got a 404 error. I checked the nginx ingress pod logs, which now showed a new error about a missing ingress class, so I added the following to my ingress configuration file under metadata:
metadata:
  name: user-ms-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
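As a side note, on newer Kubernetes versions the kubernetes.io/ingress.class annotation is deprecated in favor of a field in the spec; a minimal sketch of the equivalent:

spec:
  ingressClassName: nginx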
Now when I go to the IP of the worker node once more -> the 404 error is gone, but I got a 502 Bad Gateway error again. I checked the logs and saw connection refused errors.
I figured out that I had set a network policy for all of my microservices. I deleted the network policy and removed its settings from all my deployment files.
A final check confirmed that I can access my API and Swagger page normally.
TL;DR:
If you are using the nginx ingress on K3s, remember to disable Traefik when creating the cluster.
Don't forget to set the ingress class in your ingress configuration.
Don't set up a network policy that keeps the nginx ingress pod from calling your other pods; I removed mine (alternatively, allow the controller's traffic explicitly, as in the sketch below).
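Rather than dropping network policies entirely, a policy that admits traffic from the ingress controller's namespace should also work. This is a minimal sketch under two assumptions: the controller runs in a namespace named ingress-nginx, and the cluster sets the default kubernetes.io/metadata.name namespace label (Kubernetes 1.21+):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx
spec:
  podSelector:
    matchLabels:
      io.kompose.service: user-ms   # the app pods guarded by this policy
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed controller namespace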
You can turn on access logging on nginx, which will give you more logs on the ingress controller and let you trace every request routed through the ingress. If you are loading a UI or hitting a particular endpoint, the calls will be visible in the nginx controller logs, so you can confirm whether incoming requests are actually being routed to the proper service, and then start debugging the service itself (e.g. check whether you can curl the endpoint from any pod within the cluster).
I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you installed it using Helm, there should be a kubernetes-ingress ConfigMap for the ingress controller; by default disable-access-log is true. Change it to false and you should start seeing more logs on the ingress controller. You might want to bounce the ingress controller pods if you still do not see detailed logs.
kubectl edit cm -n <namespace> kubernetes-ingress
apiVersion: v1
data:
  disable-access-log: "false"   # turn this to false
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
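To bounce the controller pods after the change, a rollout restart is one way; this sketch assumes the controller runs as a Deployment named kubernetes-ingress (both names are assumptions):

kubectl rollout restart deployment kubernetes-ingress -n <namespace>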
I have a simple app which I am following from a basic tutorial:
https://medium.com/@bhargavshah2011/hello-world-on-kubernetes-cluster-6bec6f4b1bfd
I create a cluster hello-world2-cluster
I "connect" to the cluster using :
gcloud container clusters get-credentials hello-world2-cluster --zone us-central1-c --project strange-vortex-286312
I perform a git clone of the "hello world" project, which is basically some kind of snake game:
$ git clone https://github.com/skynet86/hello-world-k8s.git
$ cd hello-world-k8s/
Next I run "create" on the YAML:
kubectl create -f hello-world.yaml
Deployments and services are created as expected:
Fine! All good!
But... now what? What do I do now?
I want to access this app from the outside world. Where is the URL where I can access the application?
How can I get my friend Bob in Timbuktu to call some kind of URL (no need for DNS) to access my application?
I know the answer has something to do with LoadBalancer nodes and Ingress, but there seems to be no proper documentation out there on how to do this for a simple hello world app.
Official GKE hello-world with Ingress example: https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
The ingress controller is built into GKE; you just need a kind: Ingress object that points to your Service. Addressing your question:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: hello-world
              servicePort: 80
Once deployed, query kubectl get ingress hello-ingress -oyaml to find out the resulting load-balancer's IP in the status: field.
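If you only want the address itself, a jsonpath query is a handy shortcut (a sketch; the field is populated once the load balancer has been provisioned):

kubectl get ingress hello-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'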
If you don't want to set up DNS and/or an ingress, you can simply do it with what you already have, using the node's external IP.
It seems that you already have the deployment, and the NodePort service is mapped to port 30081.
So, below are the remaining steps to make it work with what you have.
Get your node IP:
kubectl get nodes -o wide
Now that you have your node's external IP, you need to allow TCP traffic to this port:
gcloud compute firewall-rules create allow-tcp-30081 --allow tcp:30081
That's it! You can just tell your friend Bob that the service should be accessible at node-ip:30081.
You can have a look at this page to see the other possibilities depending on your service type.
Also, when you use a NodePort, the port is mapped on all the nodes of your cluster, so you can use any node's IP. No need to look for the node where your replica is deployed.
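For example, Bob could verify it with a plain curl (the address is a placeholder):

curl http://<node-external-ip>:30081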
If you don't want to use an Ingress or anything else, you can expose your deployment as a LoadBalancer; then, using the external IP, your friend Bob can access the application:
kubectl expose deployment hello-world-deployment --name=hello-world-deployment --type=LoadBalancer --port 80 --target-port 80
This will take around 1 minute to assign an external IP address to the service.
To get the external IP, use:
kubectl get services
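The output will look something like the following; the external IP shown here is purely illustrative:

NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
hello-world-deployment   LoadBalancer   10.0.171.39   203.0.113.45   80:30341/TCP   2m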
Copy the external IP address shown in front of your service, open your browser, and enter the IP address that you copied.
Done! This is the easiest and simplest way to get started!
You can expose your Kubernetes services to the internet with an Ingress.
GKE allows you to use different kinds of Ingress (nginx based, and so on) but, by default, it will expose your service through the Cloud Load Balancing service.
As indicated by @MaxLobur, you can configure this default Ingress by applying a configuration file similar to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
Or, equivalently, if you want to prepare your Ingress for various backend services and configure it based on rules:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: hello-world
              servicePort: 80
This Ingress will be assigned an external IP address.
You can find which with the following command:
kubectl get ingress hello-world-ingress
This command will output something similar to the following:
NAME                  HOSTS   ADDRESS        PORTS   AGE
hello-world-ingress   *       203.0.113.12   80      2m
But, if you give this IP address to your friend Bob in Timbuktu, he will call you back soon telling you that he cannot access the application...
Now seriously: this external IP address assigned by GCP is temporary, an ephemeral one subject to change.
If you need a deterministic, fixed IP address for your application, you need to configure a static IP for your Ingress.
This will also allow you to configure, if necessary, a DNS record and SSL certificates for your application.
In GCP, the first step in this process is reserving a static IP address. You can configure it in the GCP console, or by issuing the following gcloud command:
gcloud compute addresses create hello-world-static-ip --global
This IP should later be assigned to your Ingress by modifying its configuration file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "hello-world-static-ip"
spec:
  backend:
    serviceName: hello-world
    servicePort: 80
This static IP approach only adds value: as long as it is associated with a Google Cloud network resource, you will not be charged for it.
All this process is documented in this GCP documentation page.
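To double-check the reserved address (and get the IP you can hand to Bob or point a DNS record at), you can describe it:

gcloud compute addresses describe hello-world-static-ip --global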
I have been using the Google Cloud Load Balancer ingress. However, I'm trying to install an nginxinc/kubernetes-ingress controller on a node with a static IP address in GKE.
Can I use Google's Cloud Load Balancer ingress controller in the same cluster?
How can we use the nginxinc/kubernetes-ingress with a static IP?
Thanks
In case you're using Helm to deploy nginx-ingress:
First create a static IP address. In Google Cloud, Network Load Balancers (NLBs) only support regional static IPs:
gcloud compute addresses create my-static-ip-address --region us-east4
Then install the nginx-ingress chart with the IP address as the loadBalancerIP parameter:
helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=35.186.172.1
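To avoid hard-coding the IP, you can look up the reserved address first and pass it to Helm; a sketch using gcloud's --format flag to print just the address field:

IP=$(gcloud compute addresses describe my-static-ip-address --region us-east4 --format='value(address)')
helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=$IP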
First question
As Radek 'Goblin' Pieczonka already pointed out, it is possible to do so.
I just wanted to link you to the official documentation regarding this matter:
If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the ingress.class annotation, e.g. creating an Ingress with an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
will target the GCE controller, forcing the nginx controller to ignore
it, while an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
will target the nginx controller, forcing the GCE controller to ignore it.
Second question
Since you are using Google Cloud Platform, I can give you further details regarding this Kubernetes implementation in Google.
Consider that:
By default, Kubernetes Engine allocates ephemeral external IP addresses for HTTP applications exposed through an Ingress.
However, of course you can use static IP addresses for your Ingress resource:
there is an official step-by-step guide showing how to create HTTP load balancing with Ingress, making use of an Ingress resource and linking a static IP to it, or how to promote an already-in-use "ephemeral" IP to static.
Try to go through it, and if you face an issue, update the question and ask!
For the nginx-ingress controller you have to set the external IP on the service:
spec:
  loadBalancerIP: "42.42.42.42"
  externalTrafficPolicy: "Local"
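In context, that could look like the following sketch of the controller's Service; the name and labels are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # assumed service name
spec:
  type: LoadBalancer
  loadBalancerIP: "42.42.42.42"    # the reserved static IP
  externalTrafficPolicy: "Local"   # also preserves client source IPs
  selector:
    app: nginx-ingress-controller  # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443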
It is perfectly fine to run multiple ingress controllers inside Kubernetes, but they need to be aware of which Ingress objects they are supposed to instantiate. That is done with a special annotation like:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
which says that this Ingress is expected to be handled by, and only by, the nginx ingress controller.
As for the IP, some cloud providers allow the loadBalancerIP to be specified; with this you can control the public IP of a service.
Create a static IP:
gcloud compute addresses create my-ip --global
Describe the static IP (this will help you find out the reserved address):
gcloud compute addresses describe my-ip --global
Now add these annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: "gce"   # <----
    kubernetes.io/ingress.global-static-ip-name: my-ip   # <----
Apply the ingress:
kubectl apply -f ingress.yaml
(Now wait for 2 minutes)
Run this and it will show the new IP:
kubectl get ingress
I'm trying to whitelist an IP to access a deployment inside my Kubernetes cluster.
I looked for documentation online about this, but I only found the
ingress.kubernetes.io/whitelist-source-range
annotation for Ingress to grant access to a certain IP range. Still, I couldn't manage to isolate the deployment.
Here is the ingress configuration YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-internal
  annotations:
    kubernetes.io/ingress.class: "istio"
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xx.0/24, xx.xxx.xx.0/24"
spec:
  rules:
    - host: white.example.com
      http:
        paths:
          - backend:
              serviceName: white
              servicePort: 80
I can access the deployment from my whitelisted IP, but also from my mobile phone (a different IP, not whitelisted in the config).
Has anyone run into the same problem using Ingress and Istio?
Any help, hint, docs or alternative configuration will be much appreciated.
Have a look at the annotation overview; it seems that whitelist-source-range is not supported by Istio:
whitelist-source-range: Comma-separated list of IP addresses to enable access to.
nginx, haproxy, trafficserver
I managed to solve the IP whitelisting problem for my Istio-based service (an app that uses the Istio proxy and is exposed through the Istio ingress gateway via a public LB) using NetworkPolicy.
For my case, here is the topology:
Public load balancer (in GKE, using preserve-client-IP mode) ==> dedicated Istio ingress gateway controller pods (see my answer here) ==> my pods (istio-proxy sidecar container, my main container).
So, I set up two network policies:
A NetworkPolicy that guards incoming connections from the internet to my Istio ingress gateway controller pods. In this policy's configuration, I just had to set the spec.podSelector.matchLabels field to the pod label of the dedicated Istio ingress gateway controller pods.
Another NetworkPolicy that limits incoming connections to my Deployment to only the Istio ingress gateway controller pods/deployments (a sketch follows below).
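A minimal sketch of the second policy, assuming the guarded pods carry the label app: my-app and the gateway pods run in istio-system with the default istio: ingressgateway label (all three names are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-istio-ingressgateway
spec:
  podSelector:
    matchLabels:
      app: my-app                      # assumed label of the guarded Deployment's pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: istio-system   # assumed gateway namespace
          podSelector:
            matchLabels:
              istio: ingressgateway    # default label on Istio ingress gateway pods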
My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I've found this question, which proposes setting up an Ingress to achieve my goal. So I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address on port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make this work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.