Can I guarantee the "kubernetes" Service will retain a consistent ClusterIP following cluster creation even if I attempt to modify or recreate it?

A few of our Pods access the Kubernetes API via the "kubernetes" Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the "kubernetes" Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.
Specifically, this value:
kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}'
Is it possible for the "kubernetes" Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.

The short answer is no.
More details:
You cannot change/edit the clusterIP because the field is immutable, so kubectl edit will not work for it.
However, the Service's cluster IP can easily be changed by running kubectl delete -f svc.yaml, then kubectl apply -f svc.yaml again.
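For instance, a quick way to see this in action (a minimal sketch, assuming a hypothetical Service named web defined in svc.yaml):
kubectl get service web -o jsonpath='{.spec.clusterIP}'   # e.g. 10.96.0.12
kubectl delete -f svc.yaml && kubectl apply -f svc.yaml
kubectl get service web -o jsonpath='{.spec.clusterIP}'   # very likely a different IP now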
Hence, never rely on a Service's IP; Services are designed to be referred to by DNS:
Use service-name if the client is in the same namespace.
Use service-name.service-namespace from any namespace, inside or outside the Service's own.
Use service-name.service-namespace.svc.cluster.local for the FQDN.
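To sanity-check these DNS names, you can resolve them from a throwaway Pod (a hedged example; busybox:1.28 is just one common image that ships nslookup):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local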

Yes, that is possible.
If you specify clusterIP in your Service YAML file (Service.spec.clusterIP), the IP address of your Service will not be random and will always be the same. The Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: 10.96.0.100
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  type: ClusterIP
Be careful: the IP you choose must lie within the cluster's service CIDR and must not already be assigned in your cluster.
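You can then confirm that the Service received the requested address, using the same jsonpath query as in the question:
kubectl get service web --namespace default -o jsonpath='{.spec.clusterIP}'
# expected output: 10.96.0.100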

Related

Kubernetes open port to server on same subnet

I am launching a service in a Kubernetes pod that I would like to be available to servers on the same subnet only.
I have created a service with a LoadBalancer opening the desired ports. I can connect to these ports through other pods on the cluster, but I cannot connect from virtual machines I have running on the same subnet.
So far my best solution has been to assign a loadBalancerIP and restrict it with loadBalancerSourceRanges, however this still feels too public.
The virtual machines I am attempting to connect to my service are ephemeral, and have a wide range of public IPs assigned, so my loadBalancerSourceRanges feels too broad.
My understanding was that I could connect to the internal LoadBalancer cluster-ip from servers that were on that same subnet, however this does not seem to be the case.
Is there another solution to limit this service to connections from internal IPs that I am missing?
This is all running on GKE.
Any help would be really appreciated.
I think you are right here, to a point, but I'm not sure why you mentioned the cluster-ip:
My understanding was that I could connect to the internal LoadBalancer
cluster-ip from servers that were on that same subnet, however this
does not seem to be the case.
Now, if you have a Deployment running on GKE and you have exposed it with a Service of type LoadBalancer configured as an internal LB, you will be able to reach that internal LB from anywhere in the same VPC:
apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: internal-svc
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080
Once your changes are applied, check the status using:
kubectl get service internal-svc --output yaml
In the YAML output, look at the status section at the end:
status:
  loadBalancer:
    ingress:
    - ip: 10.127.40.241
That is the actual IP you can use to reach the Service from other VMs in the subnet.
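For example, from a VM in the same VPC (a sketch, assuming the IP and port from the manifest above):
curl http://10.127.40.241:8080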
Doc ref
To restrict the service to only be available to servers on the same subnet, you can use a combination of Network Policies and Service Accounts.
First, create a Network Policy that specifies the source IP range allowed to access your service. To do this, create a YAML file containing the following:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-subnet-traffic
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: <subnet-range-cidr>
    ports:
    - protocol: TCP
      port: <port-number>
Replace the <subnet-range-cidr> and <port-number> placeholders with the relevant IP address range and port number. Once this YAML file is created, apply it to the cluster with the following command:
kubectl apply -f path-to-yaml-file
Next, you'll need to create a Service Account and assign it to the service, so that the Service Account can be used to authenticate incoming requests. To do this, add the Service Account to the service's metadata with the following command:
kubectl edit service <service-name>
You must first create a Role or ClusterRole and grant it access to the network policy before you can assign a Service Account to it. The network policy will then be applied to the Service Account when you bind the Role or ClusterRole to it. This can be done with the kubectl command line tool as follows:
kubectl create role <role_name> --verb=get --resource=networkpolicies
kubectl create clusterrole <clusterrole_name> --verb=get --resource=networkpolicies
kubectl create rolebinding <rolebinding_name> --role=<role_name> --serviceaccount=<namespace>:<service_account_name>
kubectl create clusterrolebinding <clusterrolebinding_name> --clusterrole=<clusterrole_name> --serviceaccount=<namespace>:<service_account_name>
Once the Role or ClusterRole is bound, the network policy applies to all Pods that use the Service Account. Incoming requests will then need to authenticate with the Service Account to reach the service, so only authorized requests will be able to access it.
For more info follow this documentation.

AKS Kubernetes questions

Can someone please explain how POD to POD works in AKS?
From the docs, I can see it uses the kube-proxy component to send traffic to the desired POD.
But I have been told that I must use a ClusterIP Service and bind all the relevant PODs together.
So what is the real flow? Or did I miss something? Below are a couple of questions to be more clear.
Questions:
How can PODs inside one node talk to each other? What is the flow?
How can PODs inside a cluster (on different nodes) talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services. So first we need to understand
why we need a Service: what a Service actually does for us is resolve a DNS name and give us the exact IP we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster, which is why we use a ClusterIP Service when we want pod-to-pod communication only.
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Every Service is backed by a set of iptables rules, and kube-proxy maintains these rules for every Service. So yes, kube-proxy is the most vital piece of the network setup in a k8s cluster.
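If you want to see this for yourself, you can list the NAT rules kube-proxy writes on a node (a sketch, assuming kube-proxy runs in its default iptables mode and you have shell access to a node):
sudo iptables -t nat -L KUBE-SERVICES -n | head
# each Service's cluster IP shows up here, jumping to a per-service KUBE-SVC-* chain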
How the Kubernetes network model works:
All Pods can communicate with all other Pods without using network address translation (NAT).
All Nodes can communicate with all Pods without NAT.
The IP that a Pod sees itself as is the same IP that others see it as.
On top of those guarantees, Kubernetes networking covers:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
kube-proxy handles the transmission of packets between pods, and between pods and the outside world. It acts like a network proxy and load balancer for pods running on the node, implementing load balancing using NAT in iptables.
The kube-proxy process stands between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes Service object, kube-proxy is responsible for translating that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object to all of the pod IPs mapped by the Service.
I hope that clears up kube-proxy for you.
Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod:
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve that name to the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is my_pod_name.my_service_name.my_namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; my_pod_name.my_service_name.my_namespace.svc works fine on its own.
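As a quick check, you can exec into one replica and resolve its sibling (a sketch using the names from the manifest above; it assumes the container image provides getent, which the Debian-based nginx image does):
kubectl exec my-sts-1 -- getent hosts my-sts-0.my-service.default.svc
# prints the pod IP of my-sts-0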
ref

How to get ip/address of service in k8s

I would like to create service A (redis instance) and service B (application).
Application would like to use service A (redis).
How can I automatically get the address/URL of service A inside the k8s cluster, without exposing it to the internet?
Something like:
redis://service-a-url:6379
I don't know which Kubernetes technique I should use.
So for example your redis service should look like this:
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
run: redis
spec:
ports:
- port: 6379
targetPort: 6379
protocol: TCP
selector:
run: redis
The Service is of type ClusterIP (if you don't specify a service type in the YAML file, it defaults to ClusterIP), so it is not accessible from outside the cluster. There are more types of Service; find information here: services-kubernetes.
Take a look: connecting-app-service, app-service-redis.
Kubernetes supports two modes of finding a Service - environment variables and DNS.
Kubernetes has a specific DNS cluster addon Service that automatically assigns DNS names to other Services.
Every single Service created in the cluster has its own assigned DNS name. A client Pod's DNS search list will include the Pod's own namespace and the cluster's default domain by default. This is best illustrated by example:
Assume a Service named example in the Kubernetes namespace ns. A Pod running in namespace ns can look up this service by simply doing a DNS query for example. A Pod running in namespace test can look up this service by doing a DNS query for example.ns.
Find more here: Kubernetes DNS-Based Service Discovery, dns-name-service.
You will be able to access your service within the cluster using the following record:
<service>.<ns>.svc.<zone>
For example: redis.default.svc.cluster.local
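So for the redis Service above, the connection URL your application (service B) would use looks like this (a sketch matching the redis:// form from the question):
redis://redis.default.svc.cluster.local:6379
# or, from a pod in the same namespace, simply:
redis://redis:6379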

Provide Users access to applications installed in their namespaces

I need to create a k8s cluster where users have their own namespace, with applications installed in those namespaces, which they can access from a web portal (e.g. providing http://service_ip:service_port in the case of JupyterHub). I am using Helm charts to install the applications, and I am somewhat confused by the service types, so I need your suggestion: should I use NodePort or should I use ClusterIP, and how would I discover the service URL and provide it to users? Any help would be appreciated.
Steps
Find the Service defined for the application.
Expose the Service via NodePort, LoadBalancer, or Ingress.
Reference
Kubernetes in Action Chapter 5. Services: enabling clients to discover and talk to pods
NodePort
If the client can access the nodes directly or via a tunnel (VPN or SSH tunnel), then expose the Service as NodePort type.
To do so, use kubectl expose or kubectl edit to change the spec.type.
Example:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  clusterIP: 10.100.96.203
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP    # <----- Change to NodePort (or LoadBalancer)
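Equivalently, the type can be flipped non-interactively with kubectl patch (a sketch using the Service from the example above):
kubectl patch service kubernetes-dashboard -n kube-system \
  -p '{"spec": {"type": "NodePort"}}'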
LoadBalancer
If K8S is running in AWS, Azure, or GCE, for which K8S cloud providers are supported, then the service can be exposed via the load balancer's DNS name or IP (potentially via the public Internet too, depending on the access configuration on the LB). Change the service spec.type to LoadBalancer.
For AWS cloud provider, refer to K8S AWS Cloud Provider Notes.
Ingress
K8S ingress offers a way to access via hostname and TLS. Similar to OpenShift Route.
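A minimal Ingress sketch for such an application (all names here are hypothetical, and it assumes an ingress controller such as ingress-nginx is already installed in the cluster):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyterhub
  namespace: user-namespace        # hypothetical user namespace
spec:
  rules:
  - host: user1.example.com        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyterhub       # hypothetical Service name from the Helm chart
            port:
              number: 8000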

How can I access Concourse built with helm outside of the cluster?

I am using the concourse helm build provided at https://github.com/kubernetes/charts/tree/master/stable/concourse to set up Concourse inside of our Kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster, but I am having trouble accessing it outside the cluster. The notes from the build show that I can just use kubectl port-forward to get to the webpage, but I don't want all of the developers to have to forward the port just to get to the web UI. I have tried creating a service with a node port like this:
apiVersion: v1
kind: Service
metadata:
  name: concourse
  namespace: concourse-ci
spec:
  ports:
  - port: 8080
    name: atc
    nodePort: 31080
  - port: 2222
    name: tsa
    nodePort: 31222
  selector:
    app: concourse-web
  type: NodePort
This allows me to get to the webpage and interact with it in most ways, but when I try to look at build status it never loads the events that happened. Instead, a network request for /api/v1/builds/1/events is stuck pending and the steps of the build never load. Any ideas what I can do to fully access Concourse from outside the cluster?
EDIT: It seems like the events network request normally responds with a text/event-stream data type and maybe the Kubernetes service isn't handling an event stream correctly. Or there is something about concourse that handles event-streams different than the norm.
After plenty of investigation I have found that the nodePort service is actually working, and it was just my antivirus (Sophos) silently blocking the response from the events request.
Also, you can expose your port through a LoadBalancer in Kubernetes:
kubectl get deployments
kubectl expose deployment <web pod name> --port=80 --target-port=8080 --name=expoport --type=LoadBalancer
It will create a public IP for you, and you will be able to access Concourse on port 80.
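The external IP takes a minute or two to provision; you can watch for it with the Service name from the command above:
kubectl get service expoport --watch
# wait until EXTERNAL-IP changes from <pending> to a real address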
Not sure, since I'm also a newbie, but... you can configure your chart by providing your own version of https://github.com/kubernetes/charts/blob/master/stable/concourse/values.yaml
helm install stable/concourse -f custom_values.yaml
There is an 'externalURL' param; it may be worth trying to set it to your URL:
## URL used to reach any ATC from the outside world.
##
# externalURL:
In addition, if you are on GKE, you can use an internal load balancer; set it up in your values.yaml file:
service:
  ## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
  ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
  ##
  #type: ClusterIP
  type: LoadBalancer
  ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
  # loadBalancerIP: 172.217.1.174
  ## Annotations to be added to the web service.
  ##
  annotations:
    # May be used in example for internal load balancing in GCP:
    cloud.google.com/load-balancer-type: Internal