Assign a public IP to a service running on Kubernetes

I have an application (let's say integration-protocol-api),
and this application needs to talk to another application, which is located on a different network (let's call it another-integration-protocol).
The problem is that on the another-integration-protocol side there is a whitelist, which allows connections only from selected IP addresses.
But my integration-protocol-api is Dockerized and runs on a Kubernetes cluster, so its IP address changes every time I restart my pod.
How can I assign a public, static IP to my Kubernetes pod?

There are several approaches, depending on your actual setup and needs, and I'll try to give some options here:
Tie the pod to a specific node and expose that node's IP address through a service. This would be something along those lines:
# Quick deployment/pod manifest node selector (affinity is better)
...
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node-name
...

# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: svc-myservice
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - name: tcpserviceport
    protocol: TCP
    port: 8080
    targetPort: 80
  externalIPs:
  - 111.222.222.111
The pod should be in the same namespace, tied to that node via either a node selector or affinity rules, and carry the same labels as the service's selector so the service picks it up. In this example, the IP address of the cluster node named my-node-name should be 111.222.222.111, and the application would be accessible through that IP address on port 8080.
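To verify that the service picked up the external IP, something like this should work (svc-myservice is the name from the manifest above):

kubectl get svc svc-myservice
# The EXTERNAL-IP column should show 111.222.222.111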
If applicable, expose the service through an ingress and whitelist only the ingress's public IP. Depending on your namespace separation, you'll reference your pod (wherever it might run) in the ingress through the corresponding service, using either the service name (within the namespace scope) or an FQDN such as:
<service-name>.<namespace-name>.svc.cluster.local
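As a sketch, an ingress referencing such a service could look like the following (the host myservice.example.com and the ingress name are hypothetical; the networking.k8s.io/v1 API is assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myservice        # hypothetical name
spec:
  rules:
  - host: myservice.example.com  # hypothetical hostname to whitelist behind
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-myservice  # service from the manifest above
            port:
              number: 8080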
Here is a good overview of some of these methods from the Kubernetes docs to make it more illustrative: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/

A pod makes any request to destinations outside the cluster with its node's IP address as the source (the traffic is SNAT'd). So you could whitelist your cluster nodes' IP addresses, and it should work.
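To collect the node addresses for the whitelist, something like this should do (whether the relevant type is ExternalIP or InternalIP depends on your environment):

kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'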


AKS Kubernetes questions

Can someone please explain how pod-to-pod communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired pod.
But I have been told that I must use a ClusterIP service to bind all the relevant pods together.
So what is the real flow? Or have I missed something? Below are a couple of questions to make it clearer.
Questions:
How can pods inside one node talk to each other? What is the flow?
How can pods inside a cluster (on different nodes) talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use services. So first we need to understand why we need a service: what a service actually does for us is resolve a DNS name and give us the exact IP that we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP service.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster. This is the default ServiceType. With a ClusterIP service you can't access a pod from outside the cluster, which is why we use a ClusterIP service when we want communication between pods only.
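As a minimal sketch (the service name my-clusterip-svc is a hypothetical; the label app: my-test matches the example later in this answer), such a service could look like:

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-svc  # hypothetical name
spec:
  type: ClusterIP         # the default, shown explicitly
  selector:
    app: my-test          # selects the pods to reach
  ports:
  - port: 80
    targetPort: 80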
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your pods from network sessions inside or outside of your cluster.
Every service is backed by iptables rules, and kube-proxy manages these rules for every service. So yes, kube-proxy is the most vital point of the network setup in our k8s cluster.
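If you want to see those rules yourself, and assuming kube-proxy runs in its default iptables mode, you can inspect the NAT table on a node:

# Run on a cluster node; lists the per-service dispatch chains kube-proxy maintains
sudo iptables -t nat -L KUBE-SERVICES -n | head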
How the network model works in Kubernetes:
All pods can communicate with all other pods without using network address translation (NAT).
All nodes can communicate with all pods without NAT.
The IP that a pod sees itself as is the same IP that others see it as.
These points cover:
Container-to-container networking
Pod-to-pod networking
Pod-to-service networking
Internet-to-service networking
kube-proxy handles the transmission of packets between pods, and also with the outside world. It acts like a network proxy and load balancer for pods running on the node, implementing load balancing using NAT in iptables.
The kube-proxy process stands between the Kubernetes network and the pods running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes Service object, kube-proxy translates that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object to all of the pod IPs mapped by the service.
I hope that clears up the idea of kube-proxy for you.
Let's see an example of how it works.
Here I used a headless service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80. The service will resolve the DNS name and provide the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is my_pod_name.my_service_name.my_namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; my_pod_name.my_service_name.my_namespace.svc will work fine.
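To try it out, a throwaway pod works well (the busybox image here is just an assumption; any image with wget or nslookup will do):

# Hit a specific replica by its per-pod DNS name
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://my-sts-0.my-service.default.svc:80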

Kubernetes / Metallb single entrypoint

I'm building a K8s cluster for a school project.
It's bare metal and uses MetalLB as a load balancer.
Each service runs in a separate pod:
Nginx
Wordpress
Phpmyadmin
Mysql (mariadb)
In the phpmyadmin config file, I need to link my mysql server with something like this:
$cfg['Servers'][$i]['host'] = "mysql-server-name";
I've tried to use the node's IP:
kubectl get node -o=custom-columns='DATA:status.addresses[0].address' | sed -n 2p
adding the port :3306, but I realised that none of my services could be reached through the browser with this method.
For instance, the node's IP:5050 should redirect me to my wordpress, but it doesn't.
Is there any way to get a single IP that I can use to make my pods communicate with each other?
I must add that each service works separately when I use the svc IP instead of the node's.
Here's the ConfigMap I use for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.200
The reason the node IP doesn't expose your application to other apps is that the pods in the Kubernetes cluster don't listen to requests coming to the node by default. In other words, the port on the pod is not connected to the port on the node.
The Service resource is what you need to make that connection.
Services have different types. A service of type ClusterIP will assign an IP internal to the cluster to the app. If you don't want to access your mysql database directly from the internet, this is what you would want.
Here is an example service of type ClusterIP for your project.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: metallb-system
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3306
selector selects pods that carry the label app=mysql.
port is the port that the service will listen on.
targetPort is the port that mysql is listening on.
When you create the service, you can find its IP by running this command:
kubectl get services -n metallb-system
Under the CLUSTER-IP column, note the IP of the service you created.
So in this case, if mysql is listening on 3306, you can reach it through this service at the service IP on port 80.
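Rather than hard-coding that cluster IP, you can also point phpmyadmin at the service's DNS name, which survives service re-creation. A sketch, assuming the mysql-service above and that phpmyadmin runs in the same cluster:

$cfg['Servers'][$i]['host'] = "mysql-service.metallb-system.svc.cluster.local";
$cfg['Servers'][$i]['port'] = "80";  // the service port, not mysql's 3306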
If you want to expose your wordpress app to the internet, use either the NodePort or LoadBalancer service type. Here is the reference for service types.
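With MetalLB in place, a LoadBalancer service for wordpress could look roughly like this (a sketch; the app: wordpress label and port 5050 from your question are assumptions). MetalLB should then assign it an address from your 192.168.99.100-192.168.99.200 pool:

apiVersion: v1
kind: Service
metadata:
  name: wordpress-service  # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: wordpress         # assumed pod label
  ports:
  - protocol: TCP
    port: 5050             # port exposed on the MetalLB address
    targetPort: 80         # port wordpress listens on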

What’s the usage scenario of kubernetes headless service without selector?

I know the scenario for a Kubernetes headless service with a selector.
But what's the usage scenario for a Kubernetes headless service without a selector?
Aliasing external services into the cluster DNS.
Services without selectors are used if you want to have an external database cluster in production while in your test environment you use your own databases, to point your service to a service in a different namespace or on another cluster, or when you are migrating a workload to Kubernetes.
Services without selectors are often used to alias external services into the cluster DNS.
Here is an example of a service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Because this service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the service to the network address and port where it's running by adding an Endpoints object yourself:
apiVersion: v1
kind: Endpoints
metadata:
  name: example-service
subsets:
- addresses:
  - ip: 192.0.2.42
  ports:
  - port: 9376
If you have more than one IP address for redundancy, you can repeat them in the addresses array. Once the endpoints are populated, the load balancer will start redirecting traffic from your Kubernetes service to the IP addresses.
Note: The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn’t support virtual IPs as a destination.
You can access a Service without a selector the same as if it had a selector.
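A quick way to check that the manual mapping took effect:

kubectl get endpoints example-service
# The ENDPOINTS column should show 192.0.2.42:9376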
Take a look: services-without-selector, example-service-without-selector.

GKE LoadBalancer Static IP

I created a regional static IP in the same region as the cluster and I'm trying to use it with a LoadBalancer:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    service: ambassador
  loadBalancerIP: "x.x.x.x"
However, I don't know why I am getting this error:
Error creating load balancer (will retry): failed to ensure load balancer for service default/ambassador: requested ip "x.x.x.x" is neither static nor assigned to the LB
Edit: problem solved, but...
When I created the static IP address, I used:
gcloud compute addresses create regional-ip --region europe-west1
I used this address with the Service.
It didn't work, like I said.
However, when I created an external static regional IP using the web console, the IP worked fine with my Service and was attached without problems.
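One way to compare the two addresses is to describe them and check the addressType field (regional-ip is the name from the command above):

gcloud compute addresses describe regional-ip --region europe-west1
# addressType should be EXTERNAL for use with a LoadBalancer service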
My bet is that the source IP service is not exposed then. As the official docs say:
As of Kubernetes 1.5, packets sent to Services with Type=LoadBalancer are source NAT'd by default, because all schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies them to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
Try this command to expose the source IP app behind a LoadBalancer service:
kubectl expose deployment <source-ip-app> --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
On this page you will find more guidance and a number of diagnostic commands for a sanity check:
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
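For a quick sanity check after exposing (loadbalancer is the service name from the command above):

kubectl get svc loadbalancer
# Once EXTERNAL-IP moves from <pending> to an address:
curl http://<external-ip>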

Access IP address of PODs via lookup on Ingress URL

I am learning Kubernetes and have deployed a headless service on Kubernetes (on AWS) which is exposed to the external world via an nginx ingress.
I want nslookup <ingress_url> to directly return the IP addresses of the pods.
How can I achieve that?
Inside the cluster:
It's not a good idea to let <ingress_host> resolve to a pod IP. It's a common design to serve different kinds of pods on one single hostname under different paths, but you can only set one IP record (or one group of records, with DNS load balancing) for it.
However, you can do this by adding <ingress_host> <Pod_IP> to /etc/hosts in an init script, since you can get <Pod_IP> by doing nslookup <headless_service>.
hostAliases is another option if you know the pod IP before applying the deployment; see the sketch below.
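For reference, hostAliases goes into the pod spec; a minimal fragment (the IP and hostname are placeholders):

# Pod spec fragment
spec:
  hostAliases:
  - ip: "10.0.0.12"                  # placeholder pod IP
    hostnames:
    - "my-ingress-host.example.com"  # placeholder ingress host
  containers:
  ...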
From outside:
I don't think it's possible from outside the cluster, because you need to do the DNS lookup to get to the ingress controller first, which means the name has to resolve to the IP of the ingress controller.
Finally, it's a bad idea to use a headless service in front of pods, because many apps do DNS lookups once and cache the results, which can cause problems since a pod's IP can change frequently.
If you declare a "headless" service with selectors, then the internal DNS for the service will be configured to return the IP addresses of its pods directly. This is a somewhat unusual configuration, and you should also expect an effect on other cluster-internal users of that service.
This is documented here. Example:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
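With such a service in place, a DNS query from inside the cluster should return the pod IPs directly (a sketch using a throwaway busybox pod; my-service and the default namespace follow the manifest above):

kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  nslookup my-service.default.svc.cluster.local
# Returns one A record per ready pod instead of a single cluster IP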