I have a working Nexus 3 pod, reachable on port 30080 (via NodePort): http://nexus.mydomain:30080/ works perfectly from all hosts (from inside the cluster or outside).
Now I'm trying to make it accessible on port 80 (for obvious reasons).
Following the docs, I've implemented it like this (trivial):
[...]
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus-ns
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nexus.mydomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: nexus-service
          servicePort: 80
Applying it works without errors. But when I try to reach http://nexus.mydomain, I get:
Service Unavailable
No logs are shown (the webapp is not hit).
What did I miss?
K3s Lightweight Kubernetes
K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
As I mentioned in the comments, K3s uses the Traefik Ingress Controller by default.
Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.
This information can be found in K3s Rancher Documentation.
Traefik is deployed by default when starting the server... To prevent k3s from using or overwriting the modified version, deploy k3s with --no-deploy traefik and store the modified copy in the k3s/server/manifests directory. For more information, refer to the official Traefik for Helm Configuration Parameters.
To disable it, start each server with the --disable traefik option.
If you want to deploy Nginx Ingress controller, you can check guide How to use NGINX ingress controller in K3s.
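For example, a minimal sketch of disabling Traefik and installing the Nginx Ingress controller with Helm (the exact commands depend on your K3s and chart versions, so treat this as an illustration and check the linked guides):
# Start the K3s server without Traefik (see the K3s docs quoted above)
k3s server --disable traefik

# Install the Nginx Ingress controller via Helm (assumes Helm is available on the machine)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace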
As you are using an Nginx-specific annotation like nginx.ingress.kubernetes.io/rewrite-target: /$1, you have to use the Nginx Ingress controller.
If you run more than one Ingress controller, you will need to force the use of the Nginx one with an annotation.
annotations:
  kubernetes.io/ingress.class: "nginx"
If the information above doesn't help, please provide more details, such as your Deployment and Service manifests.
I do not think you can expose it on port 80 or 443 via a NodePort service, or at least it is not recommended.
In this configuration, the NGINX container remains isolated from the
host network. As a result, it can safely bind to any port, including
the standard HTTP ports 80 and 443. However, due to the container
namespace isolation, a client located outside the cluster network
(e.g. on the public internet) is not able to access Ingress hosts
directly on ports 80 and 443. Instead, the external client must append
the NodePort allocated to the ingress-nginx Service to HTTP requests.
-- Bare-metal considerations - NGINX Ingress Controller
* Emphasis added by me.
While it may sound tempting to reconfigure the NodePort range using
the --service-node-port-range API server flag to include unprivileged
ports and be able to expose ports 80 and 443, doing so may result in
unexpected issues including (but not limited to) the use of ports
otherwise reserved to system daemons and the necessity to grant
kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches
proposed in this page for alternatives.
-- Bare-metal considerations - NGINX Ingress Controller
I did a similar setup a couple of months ago. I installed a MetalLB load balancer and then exposed the service. Depending on your provider (e.g., GKE), a load balancer can even be spun up automatically, so you possibly don't even have to deal with MetalLB, although MetalLB is not hard to set up and works great.
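For reference, a minimal MetalLB layer 2 configuration sketch using the legacy ConfigMap format (the address range below is an assumption for your network; newer MetalLB releases configure this with IPAddressPool custom resources instead):
# Hypothetical MetalLB config; adjust the address pool to a free range in your LAN
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250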
Related
I have a website running inside a kubernetes cluster.
I can access it locally, but I want to make it available over the internet (I have a registered domain), but the external IP keeps pending.
I worked with this instruction: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
This is the code for the service and ingress
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: website
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.carina.bernrieder.de
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 3000
So I'm using helm to install the nginx controller, but after that, in kubectl get all, the external IP of the nginx controller keeps pending.
EXTERNAL-IP is expected to be pending in a non-cloud environment such as minikube. You should be able to access the application using curl www.carina.bernrieder.de
Here is a guide on using nginx ingress to expose an application on minikube
As @Arghya Sadhu mentioned, in a local environment this is the expected behaviour. Maybe it will be easier to understand when you look a bit more deeply at how it works in cloud environments. Without going into details, if you apply an Ingress resource on GKE, EKS or AKS, a few more things happen "under the hood". A load balancer with an external IP is automatically created so your Ingress can use it to forward external traffic to Pods deployed on your kubernetes cluster.
Minikube doesn't have such capabilities, as it cannot make calls to any API to create additional infrastructure resources, the way it happens in cloud environments.
But let's start from the beginning. You didn't mention anything in your question about your external IP or domain configuration. If you don't have an external static IP to which your domain has been redirected, it has no chance of working anyway.
As to this point, I won't fully agree:
You should be able to access the application using curl
www.carina.bernrieder.de
Yes, you will be able to access it via your domain (actually via any domain that you don't even need to own), provided you add the following entry to your /etc/hosts file so DNS won't be used and the name will be resolved based on this locally defined mapping:
172.17.0.15 www.carina.bernrieder.de
As you can read here:
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
But keep in mind that both of those IPs will be private IPs. The one displayed within the ingress list will be the internal cluster IP, and the external one will be external only from your Minikube cluster's perspective. It will still be an IP in your local network, assigned to your Minikube VM.
And as you said in your question, you want to make it available over the Internet. As you can see, it has no chance of working without additional configuration.
Another important thing. You didn't mention where your Minikube is actually installed, so I guess you set it up on your local computer and most probably you're behind a NAT router. If this is your case, it won't be so easy to expose it on the public internet. You will need to configure proper port forwarding rules on your router, and of course you need a static IP, or you need to configure dynamic DNS to be able to access your computer on the Internet via your dynamic public IP.
Minikube was designed mainly for playing with Kubernetes locally, not for production environments. Of course you can use it to run your small app, but then you may want to think about installing it on a VM in a cloud environment or on some sort of VPS server.
I want to create a load balancer for 4 http server pods.
I have one mysql pod too.
Everything works fine: I have created a LoadBalancer service for HTTP, and another service for MySQL.
I have read I should create an Ingress too, but I do not understand what an Ingress is, because everything already works with Services.
What is the value-add of an Ingress?
Thanks
Since you have a single service serving HTTP, your current solution using the LoadBalancer service type works fine. Imagine you have multiple HTTP-based services that you want to make externally available on different routes. You would have to create a LoadBalancer service for each of them, and by default you would get a different IP address for each of them. Instead you can use an Ingress, which sits in front of these services and does the routing.
Example ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /cart
        backend:
          serviceName: cart
          servicePort: 80
      - path: /payment
        backend:
          serviceName: payment
          servicePort: 80
Here you have two different HTTP services exposed by an Ingress on a single IP address. You don't need a LoadBalancer per service when using an Ingress.
A Service of type LoadBalancer relies on a third-party load balancer and IP provisioning mechanism somewhere that deals with getting Layer 3 (IP) traffic from outside to the Nodes on some high-numbered NodePort.
An Ingress relies on a third-party Ingress Controller to accept Layer 3 traffic, open it up to Layer 7 (e.g. terminate TLS) and do protocol-specific routing (e.g. by HTTP FQDN/path) to some other Service (probably of type ClusterIP) inside the cluster.
If all your services should be explicitly exposed without any further filtering or other options, a LoadBalancer and no Ingress might be the right choice... but LoadBalancers don't do much on their own... they just expose the Service to the outside world, with very little in the way of traffic shaping, A/B testing, etc.
However, if you want to put multiple services behind a single IP/VIP/certificate, or you want to direct traffic in some weird ways (like based on a Header:, client type, percentage weighting, etc.), you'd probably want an Ingress (which itself would be exposed by a LoadBalancer Service).
I tested Ingress in minikube successfully, with no issue at all.
Then I deployed my app onto Ubuntu; when using a NodePort service, it also worked very well. After that, I wanted to use an Ingress as a load balancer to route traffic, so that the external URL no longer carries the ugly long port.
But unfortunately, I did not succeed; it always failed.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dv
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /test
        backend:
          serviceName: ngsc
          servicePort: 3000
kubectl get ing
NAME   HOSTS   ADDRESS   PORTS   AGE
dv     *                 80      12s
root@kmaster:/home/ubuntu/datavisor# kubectl describe ing dv
Name:             dv
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /   ngsc:3000 (192.168.1.14:3000,192.168.1.17:3000,192.168.1.18:3000)
Annotations:
  ingress.kubernetes.io/rewrite-target:  /
Events:  <none>
Then when I tried to access, I got following error:
curl http://cluster-ip
curl: (7) Failed to connect to <cluster-ip> port 80: Connection refused
What I really want is for the externally exposed URL to be http://ipaddress instead of http://ipaddress:30080.
I know that I can easily use nginx outside of Kubernetes to meet this requirement, but that is not ideal; I want Kubernetes to handle it so that even if the service port changes, everything is still up.
Can you check the above output and tell me what the error is? I checked a lot of docs, and every place seemed to focus only on minikube, with nothing related to a real cluster deployment. Do I need to install anything to make Ingress work? When I ran kubectl get all --all-namespaces I did not see any ingress controller installed at all. How can I install it if needed?
Thanks for your advice
Well, actually Kubernetes does not provide any Ingress controller out of the box. You have to install the Nginx Ingress controller, Traefik, or something else. An Ingress controller must run somewhere in your cluster; it's a must. The Ingress controller is the actual proxy that routes traffic to your applications.
And I think you should know that minikube under the hood also uses the nginx-ingress-controller (see https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress).
In cloud environments, ingress controllers run behind a cloud load balancer that performs load balancing between cluster nodes.
If you run an on-prem cluster, then usually your ingress controller runs as a NodePort service and you can create a DNS record pointing to your node IP addresses. It is also possible to run the ingress controller on dedicated nodes with hostNetwork: true, which allows it to use the standard 80/443 ports (see the sketch below). So there are many options here.
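For illustration, a minimal sketch of the hostNetwork approach (an excerpt of a hypothetical ingress controller Deployment; the real manifests shipped by the nginx-ingress or Traefik projects contain many more fields):
# Hypothetical excerpt of an ingress controller Deployment pod template
spec:
  template:
    spec:
      hostNetwork: true                      # bind directly to the node's ports 80/443
      dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS working with hostNetwork
      nodeSelector:
        ingress-node: "true"                 # assumed label on the dedicated ingress nodes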
I had sticky sessions working in my dev environment with minikube, with the following configurations:
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gl-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip"
spec:
  backend:
    serviceName: gl-ui-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: gl-api-service
          servicePort: 8080
Service:
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    ingress.kubernetes.io/affinity: 'cookie'
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: gl-api
Now that I have deployed my project to GKE, sticky sessions no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Anyone have any luck wiring this up? Any help would be appreciated. I want to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. It's an API Gateway (built with Zuul).
Session affinity is not available yet in the GCE/GKE Ingress controller.
In the meantime and as workaround, you can use the GCE API directly to create the HTTP load balancer. Note that you can't use Ingress at the same time in the same cluster.
1. Use NodePort for the Kubernetes Service. Set the value of the port in spec.ports[*].nodePort, otherwise a random one will be assigned (a minimal sketch follows this list).
2. Disable kube-proxy SNAT load balancing.
3. Create a Load Balancer from the GCE API, with cookie session affinity enabled. As the backend, use the port from step 1.
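For step 1, a minimal sketch of a NodePort Service with a fixed nodePort (the names and port values are illustrative; the nodePort must fall inside your cluster's NodePort range, 30000-32767 by default):
# Hypothetical Service for step 1; nodePort value is an assumption
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
spec:
  type: NodePort
  selector:
    app: gl-api
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080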
Good news! Finally they have support for these kind of tweaks as beta features!
Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:
Timeout
Connection draining timeout
Session affinity
The configuration information for a backend service is held in a custom resource named BackendConfig, that you can "attach" to a Kubernetes Service.
Together with other sweet beta-features (like CDN, Armor, etc...) you can find how-to guides here:
https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
Based on this: https://github.com/kubernetes/ingress-gce/blob/master/docs/annotations.md
there's no annotation available which could affect the session affinity setting of the Google Cloud LoadBalancer (GCLB) that is created as a result of the Ingress creation. As such:
This has to be turned on by hand: either, as suggested above, by creating the LB yourself, or by letting the ingress controller do so and then changing the backend configuration for each backend (either via the GUI or the gcloud CLI). IMHO the latter seems faster and less prone to errors. (Tested: the cookie "GCLB" was returned by the LB after the config change got propagated automatically, and subsequent requests including the cookie were routed to the same node.)
As rightfully pointed out by Matt-y-er: service.spec "externalTrafficPolicy" has to be set to "Local" to disable forwarding from the Node the GCLB selected to another. However:
One would still need to ensure:
The GCLB should not send traffic to nodes which don't run the pod, or
make sure there's a pod running on all nodes (and only a single pod, as the externalTrafficPolicy setting would not prevent load balancing over multiple local pods)
With regard to #3, the simple solution:
convert the deployment to a daemonset -> there will be exactly one pod on each node (https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) at all times
The more complicated solution (which allows having fewer pods than nodes):
It seems that GCLB's health check doesn't need to be adjusted, as the Ingress rule definition automatically sets up a health check to the backend (and not to the default healthz service)
supply anti-affinity rules to make sure there's at most a single instance of the pod on each node (https://kubernetes.io/docs/concepts/configuration/assign-pod-node/); a sketch follows below
Note: The above anti-affinity version was tested on 24th July 2018 with 1.10.4-gke.2 kubernetes version on a 2 node cluster running COS (default GKE VM image)
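A minimal sketch of such an anti-affinity rule, assuming the pods carry the label app: gl-api (the label and topology key are illustrative):
# Hypothetical pod template excerpt: schedule at most one gl-api pod per node
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: gl-api
      topologyKey: kubernetes.io/hostname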
I was trying the GKE tutorial for that on version 1.11.6-gke.6 (the latest available).
Stickiness was not there... the only thing that worked was setting externalTrafficPolicy: "Local" on the service...
spec:
  type: NodePort
  externalTrafficPolicy: Local
I opened a defect with Google about this, and they accepted it, without committing to an ETA.
https://issuetracker.google.com/issues/124064870
For the BackendConfig of the ingress loadbalancer, documentation can be found here:
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
An example snippet for type generated cookie is :
spec:
  timeoutSec: 1800
  connectionDraining:
    drainingTimeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
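For completeness, a sketch of how that spec might be wrapped in a full BackendConfig resource and attached to a Service; the resource name, apiVersion and annotation key are assumptions to verify against the GKE docs linked above (older GKE versions used cloud.google.com/v1beta1 and the beta.cloud.google.com/backend-config annotation):
# Hypothetical BackendConfig wrapping the snippet above
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 1800
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 1800
---
# Hypothetical Service excerpt attaching the BackendConfig to port 8080
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  annotations:
    cloud.google.com/backend-config: '{"ports": {"8080": "my-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 8080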
My goal is to make my web application (deployed on Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 is not applicable for me.
Looking for alternatives, I've found this question which proposes setting up an Ingress to achieve my goal. So, I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address and port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest, by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
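For illustration, a sketch of how that flag might appear in the controller's Deployment args; the flag name and the ConfigMap reference follow the wording above but vary between ingress controller versions, so verify against the controller you actually run:
# Hypothetical excerpt of the ingress controller container spec
# ("default" namespace and the ConfigMap name are assumptions)
args:
  - /nginx-ingress-controller
  - --nginx-configmap=default/nginx-ingress-controller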