I would like to create service A (redis instance) and service B (application).
Application would like to use service A (redis).
How can I automatically get the address/URL of service A inside the k8s cluster, without exposing it to the internet?
Something like:
redis://service-a-url:6379
I don't know which Kubernetes technique I should use.
So for example your redis service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    run: redis
The Service is of type ClusterIP (if you don't specify a service type in the YAML file, it defaults to ClusterIP), so the service is not accessible from outside the cluster. There are more Service types - find more information here: services-kubernetes.
Take a look: connecting-app-service, app-service-redis.
Kubernetes supports two modes of finding a Service - environment variables and DNS.
Kubernetes has a specific DNS cluster addon Service that automatically assigns DNS names to other Services.
Every single Service created in the cluster has its own assigned DNS name. A client Pod's DNS search list will include the Pod's own namespace and the cluster's default domain by default. This is best illustrated by example:
Assume a Service named example in the Kubernetes namespace ns. A Pod running in namespace ns can look up this service by simply doing a DNS query for example. A Pod running in namespace test can look up this service by doing a DNS query for example.ns.
Find more here: Kubernetes DNS-Based Service Discovery, dns-name-service.
You will be able to access your service within the cluster using the following record:
<service>.<ns>.svc.<zone>. <ttl>
For example: redis.default.svc.cluster.local
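For illustration, here is a minimal sketch of how the application (service B) could receive that address; the Deployment name, image and environment variable name are just placeholders, assuming the Redis Service above lives in the default namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b                 # hypothetical application deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service-b
  template:
    metadata:
      labels:
        run: service-b
    spec:
      containers:
      - name: app
        image: my-app:latest      # placeholder image
        env:
        - name: REDIS_URL         # assumed variable name read by the application
          value: redis://redis.default.svc.cluster.local:6379
The application then reads REDIS_URL and never needs to know the Service's ClusterIP.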
I am launching a service in a Kubernetes pod that I would like to be available to servers on the same subnet only.
I have created a service with a LoadBalancer opening the desired ports. I can connect to these ports through other pods on the cluster, but I cannot connect from virtual machines I have running on the same subnet.
So far my best solution has been to assign a loadBalancerIP and restrict it with loadBalancerSourceRanges, however this still feels too public.
The virtual machines I am attempting to connect to my service are ephemeral, and have a wide range of public IPs assigned, so my loadBalancerSourceRanges feels too broad.
My understanding was that I could connect to the internal LoadBalancer cluster-ip from servers that were on that same subnet, however this does not seem to be the case.
Is there another solution to limit this service to connections from internal IPs that I am missing?
This is all running on GKE.
Any help would be really appreciated.
I think you are partially right here, but I'm not sure why you mentioned the cluster IP:
My understanding was that I could connect to the internal LoadBalancer
cluster-ip from servers that were on that same subnet, however this
does not seem to be the case.
Now, if you have a deployment running on GKE and you have exposed it with a Service of type LoadBalancer configured as an internal LB, you will be able to access the internal LB from across the same VPC.
apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: internal-svc
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080
Once your changes are applied, check the status using:
kubectl get service internal-svc --output yaml
In the YAML output, check the last section for:
status:
  loadBalancer:
    ingress:
    - ip: 10.127.40.241
That's the actual IP you can use to connect to the service from other VMs in the subnet.
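As a quick sanity check (assuming the example IP above and a VM in the same VPC and region as the cluster), something like this from the VM should reach the service:
# Run from a VM in the same VPC; IP and port are the example values above
curl http://10.127.40.241:8080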
Doc ref
To restrict the service to only be available to servers on the same subnet, you can use a combination of Network Policies and Service Accounts.
First, you'll need to create a Network Policy that specifies the source IP range allowed to access your service. To do this, create a YAML file containing the following:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-subnet-traffic
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: <subnet-range-cidr>
    ports:
    - protocol: TCP
      port: <port-number>
Replace the <subnet-range-cidr> and <port-number> placeholders with the relevant IP address range and port number. Once this YAML file is created, you can apply it to the cluster with the following command:
kubectl apply -f path-to-yaml-file
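For example, assuming a subnet of 10.128.0.0/20 and a service port of 8080 (both purely illustrative values), the relevant part of the policy would read:
  ingress:
  - from:
    - ipBlock:
        cidr: 10.128.0.0/20   # example subnet range
    ports:
    - protocol: TCP
      port: 8080              # example service port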
Next, you'll need to create a Service Account and assign it to the service; you can use the Service Account to authenticate incoming requests. To do this, you'll need to add a Service Account to the service's metadata with the following command:
kubectl edit service <service-name>
You must first create a Role or ClusterRole and grant it access to the network policy before you can assign a Service Account to it. The network policy will then be applied to the Service Account when you bind the Role or ClusterRole to it. This can be accomplished using the Kubernetes kubectl command line tool as follows:
kubectl create role <role_name> --verb=get --resource=networkpolicies
kubectl create clusterrole <clusterrole_name> --verb=get --resource=networkpolicies
kubectl create rolebinding <rolebinding_name> --role=<role_name> --serviceaccount=<namespace>:<service_account_name>
kubectl create clusterrolebinding <clusterrolebinding_name> --clusterrole=<clusterrole_name> --serviceaccount=<namespace>:<service_account_name>
The network policy will be applied to all pods that make use of the Service Account when the Role or ClusterRole is bound to it. To access the service, incoming requests will need to authenticate with the Service Account once it has been added. The service will only be accessible to authorized requests as a result of this.
For more info follow this documentation.
Can someone please explain how POD-to-POD communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired POD.
But I have been told that I must use a ClusterIP service and bind all the relevant PODs together.
So what is the real flow? Or have I missed something? Below are a couple of questions to make it more clear.
Questions:
How can PODs on one node talk to each other? What is the flow?
How can PODs in a cluster (on different nodes) talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services. So first we need to understand why we need a Service: what a Service actually does for us is resolve the DNS name and give us the exact IP that we need to connect to a specific pod. Now, as you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster; for this reason we use a ClusterIP Service when we want pod-to-pod communication only.
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Every Service maintains iptables rules, and kube-proxy handles these iptables rules for every Service. So yes, kube-proxy is the most vital point for network setup in our k8s cluster.
How the Kubernetes networking model works:
Every Pod can communicate with all other Pods without using network address translation (NAT).
Every Node can communicate with all Pods without NAT.
The IP that a Pod sees itself as is the same IP that others see it as.
Building on those points, Kubernetes networking covers:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
kube-proxy handles the transmission of packets between pods, and also with the outside world. It acts like a network proxy and load balancer for pods running on the node by implementing load balancing using NAT in iptables.
The kube-proxy process stands in between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes service object, the kube-proxy instance is responsible to translate that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the service object to all of the pod IPs mapped by the service.
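If you want to see this for yourself, a rough sketch (assuming kube-proxy runs in its default iptables mode; the DaemonSet name can differ by distribution) is:
# The kube-proxy DaemonSet that programs the rules on every node
kubectl get daemonset kube-proxy -n kube-system

# On a worker node: dump the NAT chain kube-proxy maintains for Services
sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20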
I hope this makes the idea of kube-proxy clear.
Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve the DNS name and provide the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is my_pod_name.my_Service_Name.my_Namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; my_pod_name.my_Service_Name.my_Namespace.svc works fine.
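As a quick check (a sketch, assuming the manifests above were applied in the default namespace; busybox:1.28 is used only because its nslookup behaves reliably), you can resolve the per-pod DNS name from a throwaway pod:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-sts-0.my-service.default.svc.cluster.local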
ref
A few of our Pods access the Kubernetes API via the "kubernetes" Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the "kubernetes" Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.
Specifically, this value:
kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}'
Is it possible for the "kubernetes" Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.
The short answer is no.
More details :
You cannot change/edit clusterIP because it's immutable... so kubectl edit will not work for this field.
The Service's cluster IP can easily be changed by kubectl delete -f svc.yaml, then kubectl apply -f svc.yaml again.
Hence, never ever rely on the service IP, because services are designed to be referred to by DNS:
Use service-name if the communicator is inside the same namespace
Use service-name.service-namespace if the communicator is inside or outside the same namespace.
Use service-name.service-namespace.svc.cluster.local for FQDN.
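For example, a pod can reach the API server via the DNS name instead of the ClusterIP; this sketch assumes the default service account token is mounted at the standard path:
# From inside a pod: call the API server by DNS name, not by ClusterIP
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/version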
Yes, that is possible.
If you specify clusterIP in your Service YAML file (Service.spec.clusterIP), the IP address of your Service will not be random and will always be the same. The Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: 10.96.0.100
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  type: ClusterIP
Be careful: the IP you choose should be unassigned in your cluster (and must fall within the cluster's service CIDR range).
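A minimal way to verify the assignment (assuming the manifest above is saved as svc.yaml):
kubectl apply -f svc.yaml
kubectl get service web -o jsonpath='{.spec.clusterIP}'   # should print 10.96.0.100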
I have a Service (ClusterIP) like the following, which exposes the ports of a backend POD.
apiVersion: v1
kind: Service
metadata:
  name: fsimulator
  namespace: myns
spec:
  type: ClusterIP
  selector:
    application: oms
  ports:
  - name: s-port
    port: 9780
  - name: b-port
    port: 8780
The frontend POD should be able to connect to the backend POD using the Service. Should we use the Service name in place of the hostname to connect from the frontend POD to the backend POD?
I have to supply the service name and port through environment variables to the frontend POD container.
The environment variables are set using a ConfigMap.
Is it enough to give the service name fsimulator as the hostname to connect to?
How do I refer to the service if it is created inside a namespace?
Thanks
Check out this documentation. The internal service PORT / IP pairs for active services are indeed passed into the containers by default.
As the documentation also says, it is possible (recommended) to use a DNS cluster add-on for service discovery. Accessing service.namespace from inside or outside the namespace will resolve to the correct service route (or just service from inside the namespace). This is usually the right path to take.
Built-in service discovery is a huge perk of using Kubernetes, use the available tools if at all possible!
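As a sketch of what that could look like with your fsimulator Service (the ConfigMap name and keys are just illustrative), the frontend could receive the host and port like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config           # hypothetical ConfigMap name
  namespace: myns
data:
  BACKEND_HOST: fsimulator.myns.svc.cluster.local
  BACKEND_PORT: "9780"
---
# In the frontend Deployment's container spec:
    envFrom:
    - configMapRef:
        name: frontend-config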
I have an application (let's say Integration-protocol-api)
and this application wants to talk to another application, but that application is located on another network (let's call it Another-Integration-Protocol).
And the problem is that on the Another-Integration-Protocol side a whitelist exists, which only allows connections from selected IP addresses.
But my integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod.
How can I assign a public and static IP to my Kubernetes Pod?
But my integration-protocol-api is Dockerized and running on a Kubernetes cluster, so the IP address changes every time I restart my pod. How can I assign a public and static IP to my Kubernetes Pod?
There are several approaches, depending on your actual setup/needs and I'll try to give some options here:
Tie the pod to a specific node and expose that node's IP address through a service. This would be something along these lines:
# Quick deployment/pod manifest node selector (affinity is better)
...
spec:
  nodeSelector:
    kubernetes.io/hostname: my-node-name
...

# Service manifest
apiVersion: v1
kind: Service
metadata:
  name: svc-myservice
  labels:
    app: myapp
    tier: frontend
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
  - name: tcpserviceport
    protocol: TCP
    port: 8080
    targetPort: 80
  externalIPs:
  - 111.222.222.111
The pod should be in the same namespace, tied to that node via either a node selector or affinity rules, and have the same labels as in the Service's selector for the Service to pick it up. The IP address of the cluster node named my-node-name should be 111.222.222.111 in this example, and the service would be accessible on port 8080 at that IP address.
If applicable, expose the service through an ingress and have only the ingress public IP whitelisted. Depending on your namespace separation, you'll reference your pod (wherever it might run) in the ingress through the corresponding service, using either the service name (within the namespace scope) or an FQDN such as:
<service-name>.<namespace-name>.svc.cluster.local
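A minimal Ingress for the service above could look like this (the host and ingress class are assumptions; adapt them to your controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress         # hypothetical name
spec:
  ingressClassName: nginx         # assumed ingress controller
  rules:
  - host: myservice.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-myservice
            port:
              number: 8080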
Here is a good overview of some methods from the Kubernetes docs to make it more illustrative: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/
A pod makes any outbound request with its node's IP address as the source. So you could whitelist your cluster nodes' IP addresses, and it should work.