I have Confluent Kafka 5.3.1 pods running with Zookeeper, with SSL, on AWS.
How can I make the Ingress work? I want to give an external user access to my Kafka topic. This would have to work as a Layer-4 ingress, since Kafka listens on ports 9092-9093.
I am trying to find some documentation online.
Thank you
Ingress is an L7-only resource, so for Kafka you can use a plain LoadBalancer Service instead. Since you are running on AWS, you can use either an ELB or an NLB. Make sure you use the right annotation on your Service, for example for an NLB:
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
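As a rough sketch, the full Service could look like this, assuming your broker pods carry a label such as app: kafka and listen on 9092 (both assumptions, adjust to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: kafka-external                 # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: kafka                         # assumed pod label
  ports:
    - name: broker
      port: 9092
      targetPort: 9092
Keep in mind that the brokers' advertised listeners must also resolve to the load balancer address for external clients, but that is Kafka configuration rather than Kubernetes.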
I am learning Kubernetes and trying to deploy an app using Minikube.
I have managed to expose the service mapped to the nginx pod on the Minikube IP. I can access the nginx service at the URL $(minikube ip):$(serviceport), which is fine. However, I am looking to expose this to the public network. Currently the service is only accessible from my local machine; any other machine on my wifi network cannot access it because it is exposed only on the Minikube IP. I don't want to forward the port on my local Linux box via iptables, and I am looking for a built-in solution to expose the port to the world (and not just on the Minikube IP). I know it can be achieved, since the Minikube dashboard by default exposes its service on localhost, which implies that Minikube can talk to other network adapters and register the port; I am just not sure how.
Here is my service yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginxservice
  labels:
    app: nginxservice
spec:
  type: NodePort
  ports:
    - port: 80
      name: http
      targetPort: 80
      nodePort: 32756
  selector:
    app: nginxcontainer
#subudear is right - you need Ingress.
An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
To be able to use Ingress in general (I'm not talking about minikube right now), it is not enough to simply create an Ingress object. You should first install the related ingress controller.
There are a lot of them; the most popular are:
NGINX Ingress Controller
Kubernetes Nginx Ingress Controller
Traefik
Istio Ingress Controller
The first two are very similar but use completely different annotations; people often confuse them.
Talking about minikube:
As per the guidelines, in order to install ingress on minikube the only thing you have to do is
minikube addons enable ingress
Please note that by default minikube installs exactly the NGINX Ingress Controller:
nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m
Then you have to create an Ingress resource.
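For example, a minimal Ingress for the nginxservice above could look like this (the host nginx.example.com is just a placeholder, and the manifest assumes a cluster recent enough for the networking.k8s.io/v1 API):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress                  # hypothetical name
spec:
  rules:
    - host: nginx.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginxservice
                port:
                  number: 80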
Follow the steps in this doc - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
Can someone please explain how POD-to-POD communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired POD.
But I have been told that I must use a ClusterIP service and bind all the relevant PODs together.
So what is the real flow? Or did I miss something? Below are a couple of questions to make it clearer.
Questions:
How can PODs inside one node talk to each other? What is the flow?
How can PODs on different nodes inside a cluster talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services, so first we need to understand
why we need a Service at all: a Service resolves a DNS name and gives us the exact IP we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster, which is why we use a ClusterIP Service when we want pod-to-pod communication only.
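As an illustration, a minimal ClusterIP Service could look like this (the name backend and the label app: backend are placeholders, not anything from your cluster):
apiVersion: v1
kind: Service
metadata:
  name: backend                # hypothetical name
spec:
  type: ClusterIP              # the default, shown here for clarity
  selector:
    app: backend               # assumed pod label
  ports:
    - port: 80
      targetPort: 8080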
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Every Service is backed by iptables rules, and kube-proxy maintains these rules for every Service. So yes, kube-proxy is a vital piece of the network setup in a k8s cluster.
How the Kubernetes network model works:
all Pods can communicate with all other Pods without using network address translation (NAT).
all Nodes can communicate with all Pods without NAT.
the IP that a Pod sees itself as is the same IP that others see it as.
With those points in place, the model covers:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
kube-proxy handles the transmission of packets between pods, and also between pods and the outside world. It acts like a network proxy and load balancer for pods running on the node by implementing load balancing using NAT in iptables.
The kube-proxy process stands in between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes Service object, the kube-proxy instance is responsible for translating that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object to all of the pod IPs mapped by the Service.
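If you want to see those rules on a node, you can inspect the NAT table directly; a rough illustration for kube-proxy in iptables mode (the KUBE-SVC-... chain name is a per-Service hash, so substitute one from the first command's output):
# list the per-Service entries kube-proxy created
sudo iptables -t nat -L KUBE-SERVICES -n
# follow one Service chain to see the pod endpoints it load-balances to
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n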
I hope this makes the idea of kube-proxy clear.
Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
    - port: 80
      name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve that name and provide the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is pod_name.service_name.namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; pod_name.service_name.namespace.svc works fine.
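To verify this from inside my-sts-1, something like the following should work (assuming curl is available in the image):
kubectl exec my-sts-1 -- curl -s http://my-sts-0.my-service.default.svc:80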
ref
I've got Traefik up as an Ingress controller in Kubernetes with this configuration: https://github.com/RedxLus/traefik-simple-kubernetes/tree/master/V1.7
It works well for HTTP and HTTPS, but I don't know how to open and forward other ports, for example for a Pod behind an Ingress running MySQL on port 3306.
Thanks for every answer!
Traefik doesn't support it if you are using an Ingress resource, since that resource doesn't support L4 traffic, as mentioned in the other answer.
But if you are using an Nginx ingress controller there is a workaround: use a ConfigMap together with the ingress controller options --tcp-services-configmap and --udp-services-configmap as described here. Then your tcp-services ConfigMap would look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
The advantage of this is having a single entry point to your cluster (this applies to any ingress used for TCP/UDP), but the downside is the overhead of an extra layer compared to simply having a Kubernetes Service (NodePort or LoadBalancer) that already listens on TCP/UDP ports.
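Note that the mapped port also has to be exposed on the ingress controller's own Service; a sketch of the extra entry, assuming a typical ingress-nginx install (names and labels vary between installs, so check yours):
# excerpt of the ingress-nginx controller Service, with the mapped TCP port added
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed label, check your install
  ports:
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000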
The Kubernetes Ingress API does not support it. However, it is possible to use Traefik as a TCP proxy for your desired use case, but only if you make use of TLS-encrypted connections. Otherwise, at layer 4, it's not possible to distinguish between the different hostnames and you would have to use one entrypoint per TCP router. Check this issue on GitHub.
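As an illustration only, with the Traefik v2 CRDs a TCP router could look roughly like this (the mysql entrypoint, hostname and Service name are all assumptions):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-route                        # hypothetical name
  namespace: default
spec:
  entryPoints:
    - mysql                                # assumes an entrypoint bound to port 3306
  routes:
    - match: HostSNI(`db.example.com`)     # TLS/SNI is what makes hostname routing possible
      services:
        - name: mysql                      # assumed Service name
          port: 3306
  tls:
    passthrough: true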
I've created a private GKE cluster with Istio through the Cloud Console UI. The cluster is set up with VPC Peering to be able to reach another private GKE cluster in another Google Cloud Project.
I've created a Deployment (called website) with a Service in Kubernetes in the staging namespace. My goal is to expose this service to the outside world with Istio, using the Envoy proxy. I've created the necessary VirtualService and Gateway to do so, following this guide.
When running "kubectl exec ..." to access a pod in the private cluster, I can successfully connect to the internal IP address of the website service, and see the output of that service with "curl".
I have set up a NAT Gateway so pods in the private cluster can connect to the Internet. I confirmed this by curl-ing various non-Google web pages from within the website pod.
However, I can't connect to the website service from the outside, using the External IP of the istio-ingressgateway service, as the guide above mentions. Instead, curl-ing that External IP leads to a timeout.
I've put the full YAML config for all related resources in a private Gist, here: https://gist.github.com/marceldegraaf/0f36ca817a8dba45ac97bf6b310ca282
I'm wondering if I'm missing something in my config here, or if my use case is actually impossible?
Looking at your Gist I suspect the problem lies in the joining up of the Gateway to the istio-ingressgateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
  namespace: staging
  labels:
    version: v1
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
In particular I'm not convinced the selector part is correct.
You should be able to do something like
kubectl describe po -n istio-system istio-ingressgateway-rrrrrr-pppp
to find out what the selector is trying to match in the Istio Ingress Gateway pod.
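Alternatively, listing the gateway pods by that label is a quick sanity check (assuming they run in istio-system):
kubectl get pods -n istio-system -l istio=ingressgateway --show-labels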
I had the same problem. In my case, the Istio VirtualService didn't find my service.
Try this on your VirtualService:
route:
  - destination:
      host: website
      port:
        number: 80
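In context, the whole VirtualService could look roughly like this (the names website and website-gateway are taken from the question's setup; adjust them to your own):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website
  namespace: staging
spec:
  hosts:
    - "*"
  gateways:
    - website-gateway
  http:
    - route:
        - destination:
            host: website
            port:
              number: 80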
After verifying all the options, the only way to expose the private GKE cluster with Istio externally is to use Cloud NAT.
Since the Master node within GKE is a managed service, there are current limits when using Istio with a private cluster. The only workaround that would accomplish your use case is to use Cloud NAT. I have also attached an article on how to get started using Cloud NAT here.
My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I've found this question proposing to set up an Ingress to achieve my goal. So I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address on port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means the external client IPs are not being properly passed through the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
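If the upstream is indeed Nginx, the relevant server block could look roughly like this (a sketch; the CIDR is just the cluster range mentioned above, so substitute the addresses your ingress controller actually connects from):
server {
    listen 8080 proxy_protocol;        # accept the PROXY protocol header

    # trust the PROXY protocol information only from the ingress controller's range
    set_real_ip_from 172.16.0.0/16;
    real_ip_header proxy_protocol;

    location / {
        # $remote_addr now reflects the original client address
        return 200 "client: $remote_addr\n";
    }
}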
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest, by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
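In the controller Deployment that could look roughly like this excerpt (flag names changed in later controller versions, so treat this as a sketch):
containers:
  - name: nginx-ingress-controller
    image: ...                                   # your existing controller image
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller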