GKE Internal Ingress for Headless Service

I'm trying to create an internal ingress for inter-cluster communication on GKE. The service I'm trying to expose is headless and points to a Kafka broker on the cluster.
However, when I try to load the ingress, it says it cannot find the service:
Warning Sync 3m22s (x17 over 7m57s) loadbalancer-controller Error syncing to GCP: error running load balancer syncing routine: loadbalancer coilwp7v-redpanda-test-abc123-redpanda-japm3lph does not exist: googleapi: Error 400: Invalid value for field 'resource.target': 'https://www.googleapis.com/compute/v1/projects/abc-123/regions/europe-west2/targetHttpProxies/k8s2-tp-coilwp7v-redpanda-test-abc123-redpanda-japm3lph'. A reserved and active subnetwork is required in the same region and VPC as the forwarding rule., invalid
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-redpanda
  namespace: redpanda-test
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: redpanda-service
      port:
        number: 9092
Service:
apiVersion: v1
kind: Service
metadata:
  name: redpanda-service
  namespace: redpanda-test
  annotations:
    io.cilium/global-service: "true"
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: abc-panda
spec:
  type: ExternalName
  externalName: redpanda-cluster-0.redpanda-cluster.redpanda-test.svc.cluster.local
  ports:
    - port: 9092
      targetPort: 9092

Setting up an ingress for internal load balancing requires you to configure a proxy-only subnet in the same region and VPC used by your GKE cluster. This subnet is used for the load balancer's proxies. You'll also need to create a firewall rule to allow traffic.
Have a look at the prerequisites for internal ingress, and then look here for how to set up the proxy-only subnet for your VPC.
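As a rough sketch with gcloud (the subnet name, network name and IP range below are hypothetical and must match your VPC; europe-west2 is taken from the error message):
# Hypothetical names/ranges -- adjust to your VPC.
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=europe-west2 \
    --network=my-vpc \
    --range=10.129.0.0/23

# Allow the proxies in that range to reach the backends (port 9092 for the Kafka listener).
gcloud compute firewall-rules create allow-proxy-connection \
    --network=my-vpc \
    --allow=tcp:9092 \
    --source-ranges=10.129.0.0/23
Once the proxy-only subnet exists in the same region and VPC as the forwarding rule, the controller should be able to sync the internal load balancer.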

Related

Kubernetes Ingress - how to access my service on my computer?

I have the following template, with a Deployment, a Service and an Ingress. I previously ran minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: datamastery/fastapi
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 3000
      nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
Normally I would now expect to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider that has LoadBalancer capabilities. When in doubt, you should use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is also not required; it would only make sense with a NodePort service, which is itself useful in a few use cases but should not be used otherwise.
You did create an Ingress. If you have an ingress controller, you want to connect to its IP/port. The Host header tells your ingress controller where to route the request within your SDN.
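As a concrete sketch for minikube (the IP shown is hypothetical; minikube ip prints your actual node address, and depending on your driver you may need minikube tunnel instead):
minikube ip
# e.g. 192.168.49.2 (yours will differ)

# /etc/hosts entry pointing the host at the minikube node, not the ClusterIP:
192.168.49.2 datamastery.com

# The request now reaches the ingress controller, which routes on the Host header:
curl http://datamastery.com/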
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have a single node OR you really control where your service pods get deployed. Otherwise it is not suitable to use a node IP to access services.
To overcome this we usually use an ingress as a service proxy. Incoming traffic is routed to the correct service's pods depending on the URL and port, and the ingress also manages SSL termination. So this is basically your first "load balancer", as the ingress distributes traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace, for example for Nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This spins up a cloud load balancer at your provider and links it to the ingress service in your cluster. So now you have a real load balancer in place balancing traffic between your nodes, while the ingress routes it to your services and the services to your pods.
Back to your question:
In your config files you try to spin up a service with type: LoadBalancer. This would skip the ingress part and spin up a second cloud load balancer at your provider, dedicated to this single service.
You have to remove the type (and the nodePort) to use the default ClusterIP for your service.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
In addition, you have a port mismatch: your Ingress object points to port 3000, but your Service object exposed port 5000. So we change this as well.
With this config, traffic for the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and on to your pods.
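For completeness, a sketch of a matching Ingress, assuming it lives in the same namespace as fastapi-service (an Ingress can only reference Services in its own namespace); the name fastapi-ingress is hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fastapi-ingress   # hypothetical name, same namespace as fastapi-service
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000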

How to limit IP Addresses that have access to kubernetes service?

Is there any way to limit access to a Kubernetes Service of type LoadBalancer from outside the cluster?
I would like to expose my database's pod to the Internet using a LoadBalancer service that is accessible only from my external IP address.
My Kubernetes cluster runs on GKE.
You can use loadBalancerSourceRanges to filter load-balanced traffic, as mentioned here.
Here is a simple example of a Service in front of the Nginx ingress controller:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  name: external-ingress-nginx-controller
  namespace: kube-ingress
spec:
  loadBalancerSourceRanges:
    - <YOUR_IP_1>
    - <YOUR_IP_2>
    - <YOUR_IP_3>
  ports:
    - name: https
      nodePort: 32293
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
Yes, you can achieve that at the Kubernetes level with a native Kubernetes NetworkPolicy, where you can limit ingress traffic to your Kubernetes Service by specifying policies of the Ingress type.
An example could be:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
      ports:
        - protocol: TCP
          port: 6379
More information can be found in the official documentation.
If you want to block traffic from unwanted IP addresses at the load balancer level instead, you have to define firewall rules and apply them to your GCP load balancer.
More information regarding GCP firewall rules can also be found in the documentation.
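As a rough sketch with gcloud (the rule name, port, source range and node tag below are hypothetical; use the network tag your GKE nodes actually carry):
gcloud compute firewall-rules create allow-db-from-my-ip \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-ranges=203.0.113.7/32 \
    --target-tags=gke-my-cluster-node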

Kubernetes Externalname service - how to connect

We have two clusters. cluster1 has a namespace test1 and a service running as ClusterIP.
We have to call that service from another cluster (cluster2), from namespace dev1.
I have defined an ExternalName service in cluster2 pointing to another ExternalName service in cluster1.
And the ExternalName service in cluster1 points to the original service running as ClusterIP.
In cluster2:
kind: Service
apiVersion: v1
metadata:
  name: service
  namespace: dev1
  labels:
    app: service
spec:
  selector:
    app: service
  type: ExternalName
  sessionAffinity: None
  externalName: service2.test.svc.cluster.local
status:
  loadBalancer: {}
In cluster1: ExternalName service
kind: Service
apiVersion: v1
metadata:
  name: service2
  namespace: test1
  labels:
    app: service
spec:
  selector:
    app: service
  type: ExternalName
  sessionAffinity: None
  externalName: service1.test1.svc.cluster.local
status:
  loadBalancer: {}
In cluster1: ClusterIP service
kind: Service
apiVersion: v1
metadata:
  name: service1
  namespace: test1
  labels:
    app: service1
spec:
  ports:
    - name: http
      protocol: TCP
      port: 9099
      targetPort: 9099
  selector:
    app: service1
  clusterIP: 102.11.20.100
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
But there is no hit to the service in cluster1. I tried adding spec:port:9099 to the ExternalName services as well; it still does not work.
What could be the reason? There is nothing specific in the logs either.
This is not what ExternalName services are for.
ExternalName services are used to have a cluster-internal service name that forwards traffic to another (internal or external) DNS name. In practice, an ExternalName service creates a CNAME record mapping the cluster-local service name to the external DNS name. It does not expose anything outside of your cluster. See the documentation.
What you need to do is expose your services outside of your Kubernetes clusters; then they will become usable from the other cluster as well.
There are different ways of doing this.
For example:
NodePort service: when using a NodePort, your service is exposed on each node in the cluster on a random high port (by default in the 30000-32767 range). If your firewall allows traffic to that port, you can reach your service using it.
LoadBalancer service: if you are running Kubernetes in an environment that supports load balancer allocation, you can expose your service to the internet using a load balancer.
Ingress: if you have an ingress controller running in your cluster, you can expose your workload using an Ingress.
On the other cluster, you can then simply reach the exposed service, for example:
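A sketch of exposing service1 from cluster1 with a LoadBalancer (the name service1-external is hypothetical, and this assumes cluster1's environment can allocate load balancers):
kind: Service
apiVersion: v1
metadata:
  name: service1-external   # hypothetical name; selects the existing service1 pods
  namespace: test1
spec:
  type: LoadBalancer
  selector:
    app: service1
  ports:
    - name: http
      protocol: TCP
      port: 9099
      targetPort: 9099
cluster2 can then call the external IP (or DNS name) assigned to that load balancer directly, or wrap it in its own ExternalName service to keep a stable in-cluster name.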

Nginx Ingress Failing to Serve

I am new to k8s
I have a deployment file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: jenkins
          image: jenkins
          ports:
            - containerPort: 8080
            - containerPort: 50000
My Service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      name: http
  selector:
    component: web
My Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: jenkins.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
I am using the nginx ingress project, and my cluster was created using kubeadm with 3 nodes.
nginx ingress
I first ran the mandatory command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com it didn't work.
When I tried the command
kubectl get ing
the Ingress resource doesn't get an IP address assigned to it.
The ingress resource is nothing but the configuration of a reverse proxy (the Ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you say you used to get the ingress controller running, there is no sign of exposure to the outside network.
At a minimum you need to define a Service to expose your controller (it could be a LoadBalancer if the provider hosting your cluster supports it); alternatively you can use hostNetwork: true or a NodePort.
To use the latter option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
I suggest you read the Ingress documentation page to get a clearer idea of how all this works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
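As a sketch of the NodePort route (assuming the linked manifest still creates the ingress-nginx Service in the ingress-nginx namespace; the node IP and port are placeholders for your cluster's values):
kubectl apply -f https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
# Look up the node ports assigned to the controller:
kubectl -n ingress-nginx get svc ingress-nginx
# Reach the controller through any node IP on the assigned HTTP node port,
# passing the Host header your Ingress rule expects:
curl -H "Host: jenkins.xyz.com" http://<node-ip>:<http-node-port>/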
In order to access your local Kubernetes cluster pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. Then you can access the service using any of the node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture for:
Load balancing traffic, external or internal
Controlling failures, retries, routing
Applying limits and monitoring network traffic between services
Securing communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).

Can I use ingress-nginx to simply route traffic?

I really like the Kubernetes Ingress schematics. I currently run ingress-nginx controllers to route traffic into my Kubernetes pods.
I would like to use this to also route traffic to 'normal' machines, i.e. VMs or physical nodes that are not part of my Kubernetes infrastructure. Is this possible? How?
In Kubernetes you can define an ExternalName service, in which you point an FQDN at an external server.
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
Then you can use my-service in your nginx Ingress rule, for example:
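A sketch of such a rule (the Ingress name, host and port 80 are assumptions; ingress-nginx resolves the ExternalName and proxies to my.database.example.com on the given port):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-ingress   # hypothetical name
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: external.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80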
You can create a static Service and corresponding Endpoints for external services that are not part of Kubernetes, and then use that Service in an Ingress to route traffic.
Also see the ingress-nginx docs on enabling custom upstream checks:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks
In the example below, just change the port/IP according to your needs:
apiVersion: v1
kind: Service
metadata:
  labels:
    product: external-service
  name: external-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    product: external-service
  name: external-service
subsets:
  - addresses:
      - ip: x.x.x.x
      - ip: x.x.x.x
      - ip: x.x.x.x
    ports:
      - name: http
        port: 80
        protocol: TCP
I don't think it's possible out of the box: ingress-nginx gets pod information by watching Namespace, Service, Endpoints and Ingress resources and then routes traffic to the pods. Without these Kubernetes-specific resources, ingress-nginx has no way to find the IPs it needs to load balance. ingress-nginx also has no health-check method of its own; it relies on Kubernetes' built-in mechanisms to check the health of the running pods.