Kubernetes v1.1 bare metal: how to connect an Ingress to the outside world

I have Kubernetes set up on bare-metal CoreOS.
Until now I have connected the outside world to my services with an nginx reverse proxy.
I'm trying the new Ingress resource.
For now I have added a simple Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-ui
spec:
  backend:
    serviceName: kube-ui
    servicePort: 80
that starts like this:
INGRESS
NAME      RULE      BACKEND      ADDRESS
kube-ui   -         kube-ui:80
My question is: how do I connect from the outside internet to that Ingress, given that the resource has no ADDRESS?

POSTing this to the API server will have no effect if you have not configured an Ingress controller. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found here.

Check this gist. It is for ingress-nginx, not kubernetes-ingress.
Prerequisite
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Expose via hostNetwork (be sure you know what you are doing; as documented, you can use a NodePort or LoadBalancer Service instead.)
kubectl edit deployment.apps/nginx-ingress-controller -n ingress-nginx
and add:
template:
  spec:
    hostNetwork: true
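Once the controller pod restarts with hostNetwork: true, it binds ports 80/443 directly on the node, so a quick check is to curl the node itself (the node IP below is a placeholder). Even before any Ingress rule exists, an nginx 404 response shows the controller is reachable:
curl http://<node-ip>/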
Port forwarding
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/example-go:8080"
You can also use an Ingress object to expose the service:
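For example, a minimal sketch pointing at the same default/example-go Service referenced by the ConfigMap above (the host name is a placeholder):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-go
  namespace: default
spec:
  rules:
    - host: example.local        # placeholder host; replace with your own
      http:
        paths:
          - path: /
            backend:
              serviceName: example-go
              servicePort: 8080
With hostNetwork enabled, HTTP traffic for that host arriving on any node's port 80 reaches the controller and is routed to the Service.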


Nginx ingress sends private IP for X-Real-IP to services

I have created an Nginx Ingress and a Service with the following code:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP
  selector:
    name: my-app
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: myingress
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: myservice
                port:
                  number: 8000
The Nginx ingress controller was installed with:
helm install ingress-nginx ingress-nginx/ingress-nginx
I have also enabled proxy protocol for the ELB, but in the nginx logs I don't see the real client IP in the X-Forwarded-For and X-Real-IP headers. These are the final headers I see in my app logs:
X-Forwarded-For:[192.168.21.145] X-Forwarded-Port:[80] X-Forwarded-Proto:[http] X-Forwarded-Scheme:[http] X-Real-Ip:[192.168.21.145] X-Request-Id:[1bc14871ebc2bfbd9b2b6f31] X-Scheme:[http]
How do I get the real client IP instead of the ingress pod IP? Also, is there a way to know which headers the ELB is sending to the ingress?
One solution is to use externalTrafficPolicy: Local (see documentation).
In fact, according to the kubernetes documentation:
Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client.
...
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
If you want to follow this route, update your nginx ingress controller Service and add the externalTrafficPolicy field:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  ...
  externalTrafficPolicy: Local
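If you prefer to patch the running Service instead of editing the manifest, something like this should work (the Service name and namespace are assumptions; adjust them to match your install):
kubectl patch svc nginx-ingress-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'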
A possible alternative is to use the Proxy Protocol (see documentation).
The Proxy Protocol must be enabled in the ConfigMap for the ingress controller as well as on the ELB.
For L4 (TCP), use use-proxy-protocol; for L7 (HTTP), use use-forwarded-headers:
# configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
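As for seeing exactly which headers reach your workloads, one option is to point a test Ingress rule at a small echo server and curl it; the pod name, image, and host below are just one possible choice:
# run a tiny server that echoes back the request headers it receives
kubectl run echo-test --image=registry.k8s.io/echoserver:1.4 --port=8080
kubectl expose pod echo-test --port=8080
# add an Ingress rule for the echo-test Service, then inspect the response:
curl http://<your-host>/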
Just expanding on strongjz's answer.
By default, the load balancer created in AWS for a Service of type LoadBalancer is a Classic Load Balancer operating at Layer 4, i.e., proxying at the TCP protocol level.
For this scenario, the best way to preserve the real IP is to use the Proxy Protocol, because it carries the client address at the TCP level.
To do this, you should enable the Proxy Protocol both on the load balancer and on nginx-ingress.
These values should do it for a Helm installation of nginx-ingress:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
The service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation tells the aws-load-balancer-controller to create your LoadBalancer with the Proxy Protocol enabled. I'm not sure what happens if you add it to a pre-existing ingress-nginx installation, but it should work too.
use-proxy-protocol and real-ip-header are options passed to Nginx to enable the Proxy Protocol there as well.
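A minimal sketch of applying them, assuming the chart was installed under the release name ingress-nginx as in the question and the values above are saved as values.yaml:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f values.yaml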
References:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#proxy-protocol-v2
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol

How to use ALB Ingress with the networking.k8s.io/v1 API in EKS

Previously I was using the extensions/v1beta1 API to create an ALB on Amazon EKS. After upgrading EKS to v1.19 I started getting warnings:
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
So I updated my Ingress configuration accordingly and redeployed, but the ALB is not launching in AWS and I am not getting an ALB address.
Ingress configuration -->
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "pub-dev-alb"
  namespace: "dev-env"
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: "dev.test.net"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: "dev-test-tg"
                port:
                  number: 80
Node port configuration -->
apiVersion: v1
kind: Service
metadata:
  name: "dev-test-tg"
  namespace: "dev-env"
spec:
  ports:
    - port: 80
      targetPort: 3001
      protocol: TCP
  type: NodePort
  selector:
    app: "dev-test-server"
I used this documentation to create the ALB ingress controller.
Could anyone help me here?
Your Ingress should work fine even though you use the newest API version. The warnings only indicate that kubectl read the object through the deprecated API version; you don't have to worry about them.
Here is the explanation of why this warning occurs even if you use apiVersion: networking.k8s.io/v1:
This is working as expected. When you create an ingress object, it can be read via any version (the server handles converting into the requested version). kubectl get ingress is an ambiguous request, since it does not indicate what version is desired to be read.
When an ambiguous request is made, kubectl searches the discovery docs returned by the server to find the first group/version that contains the specified resource.
For compatibility reasons, extensions/v1beta1 has historically been preferred over all other api versions. Now that ingress is the only resource remaining in that group, and is deprecated and has a GA replacement, 1.20 will drop it in priority so that kubectl get ingress would read from networking.k8s.io/v1, but a 1.19 server will still follow the historical priority.
If you want to read a specific version, you can qualify the get request (like kubectl get ingresses.v1.networking.k8s.io ...) or can pass in a manifest file to request the same version specified in the file (kubectl get -f ing.yaml -o yaml)
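For example, reading the Ingress from the question at a specific version:
kubectl get ingresses.v1.networking.k8s.io pub-dev-alb -n dev-env -o yaml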
You can also see a similar question.

Kong Ingress Controller has no effect on Kong Plugins

I have gone through the kong-ingress-controller deployment and getting-started docs and done everything mentioned:
Update User Permissions
Deploy Kong Ingress Controller
Setup environment variables
Created Ingress with Routes
Everything works fine and I can access my applications through the routes, but when I add the rate-limiting plugin (or any other plugin) it has no effect.
ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-ratelimit, http-auth
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - path: /users
            backend:
              serviceName: my-service
              servicePort: 80
rate-limit.yaml:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  labels:
    global: 'true'
config:
  minute: 5
plugin: rate-limiting
But the rate limit plugin has no effect on my ingress.
NB: The kong-ingress-controller is in the kong namespace but the other resources are in the default namespace. When I moved everything to the kong namespace the plugins worked, but the service did not, since it is in the default namespace.
Thanks in advance.
Looking at the Kong docs, the rate-limit YAML looks correct. If the resource is configured correctly, then Kong is probably not matching the request against the Ingress resource because the request being sent does not match its rules.
The KongPlugin and KongIngress should be in the same namespace as the Service; the YAML provided looks correct. There may be something wrong in the Ingress annotations and configuration: is your Service actually referenced by the Ingress object?
I think you need to add this annotation to your KongPlugin:
annotations:
  kubernetes.io/ingress.class: kong
So try with
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  annotations:
    kubernetes.io/ingress.class: kong
[...]
In my scenario, I wanted to apply the KongPlugin to a specific Ingress resource/route.
What worked for me was to create the KongPlugin object in the same namespace where the Ingress resource (and therefore the target Service) lived.
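A sketch of that setup, reusing the names from the question and assuming the app lives in default (depending on your controller version the annotation key is plugins.konghq.com or konghq.com/plugins):
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default          # same namespace as the Ingress and Service
plugin: rate-limiting
config:
  minute: 5
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-ratelimit   # attach the plugin to this Ingress only
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - path: /users
            backend:
              serviceName: my-service
              servicePort: 80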

How come GKE gives me different IPs for each ingress that I create?

I am using multiple Ingress resources on GKE; say I have 2 Ingresses in different namespaces. I create each Ingress resource as shown in the YAML below. With the annotation used there, I clearly state that I am using the GCE controller that ships with GKE (https://github.com/kubernetes/ingress-gce). But every time I create an Ingress I get a different IP: for instance, sometimes I get 133.133.133.133 and other times 133.133.133.134. And it alternates between only these two IPs (probably because of a quota limit). This is a problem when I just want to reserve one IP and load-balance/terminate multiple apps on that IP only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: gce
  name: http-ingress
spec:
  backend:
    serviceName: http-svc
    servicePort: 80
In your Ingress resource you can specify that the load balancer should use a specific IP address with the kubernetes.io/ingress.global-static-ip-name annotation, like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
  name: http-ingress
spec:
  backend:
    serviceName: http-svc
    servicePort: 80
You will need to create a global static IP first using the gcloud tool. See step 2(b) here: https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip.
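For example (static-ip-name here matches the annotation above; pick any name you like):
gcloud compute addresses create static-ip-name --global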

GKE Ingress Basic Authentication (ingress.kubernetes.io/auth-type)

I'm trying to get a GKE Ingress to require basic auth, like this example from GitHub.
The Ingress works fine and routes to the service, but the authentication isn't working; it allows all traffic right through. Has GKE not rolled this feature out yet? Is something obviously wrong in my specs?
Here's the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: super-ingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
    - host: zzz.host.com
      http:
        paths:
          - backend:
              serviceName: super-service
              servicePort: 9000
            path: /*
And the basic-auth secret:
$ kubectl get secret/basic-auth -o yaml
apiVersion: v1
data:
  auth: XXXXXXXXXXXXXXXXXXX
kind: Secret
metadata:
  creationTimestamp: 2016-10-03T21:21:52Z
  name: basic-auth
  namespace: default
  resourceVersion: "XXXXX"
  selfLink: /api/v1/namespaces/default/secrets/basic-auth
  uid: XXXXXXXXXXX
type: Opaque
Any insights are greatly appreciated!
The example you linked to is for the nginx ingress controller. GKE uses GLBC, which doesn't support auth.
You can deploy an nginx ingress controller in your GKE cluster. Note that you need to annotate your Ingress to keep the GLBC from claiming it. Then you can expose the nginx controller directly, or create a GLBC Ingress to redirect traffic to the nginx ingress (see this snippet written by bprashanh).
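A sketch of how this can fit together, reusing the names from the question (the htpasswd user is a placeholder, and the auth annotations are honored by the nginx controller, not GLBC):
# create an htpasswd-format secret for the nginx controller to consume
htpasswd -c auth myuser
kubectl create secret generic basic-auth --from-file=auth
On the Ingress, the class annotation keeps GLBC away:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: super-ingress
  annotations:
    kubernetes.io/ingress.class: nginx          # claimed by nginx, not GLBC
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
    - host: zzz.host.com
      http:
        paths:
          - path: /
            backend:
              serviceName: super-service
              servicePort: 9000
Note the path is / rather than /*, since nginx uses plain prefix paths while the /* style is specific to GLBC.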