Kubernetes Service for selecting pods in two namespaces

I have a service "A" deployed in namespace "X". Service "Z" from namespace "P" calls it at
svc-a.x.svc.cluster.local
I now have to deploy a staging instance of service "A" in namespace "Y" as well, and I want to register those IPs under
svc-a.x.svc.cluster.local
Is there any way to do this? I want the main Service to select pods from different namespaces.

You can try using a Service without selectors together with an EndpointSlice that refers to the Service in each namespace.
Create svc-a in namespace X which selects / points to pods in namespace X. The Service will be available at svc-a.x.svc.cluster.local.
Create svc-a in namespace Y which selects / points to pods in namespace Y. The Service will be available at svc-a.y.svc.cluster.local.
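For reference, the selector-based Service in namespace X might look like the following (a minimal sketch; the pod label app: svc-a and the target port are assumptions, and the Service in namespace Y would be analogous):
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  namespace: x
spec:
  selector:
    app: svc-a          # assumed label on the pods in namespace X
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376  # assumed container port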
Create a svc-a in namespace Z without selectors.
apiVersion: v1
kind: Service
metadata:
  name: svc-a
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
The Service will be available at svc-a.z.svc.cluster.local.
Create an EndpointSlice in namespace Z with svc-a.x.svc.cluster.local and svc-a.y.svc.cluster.local as endpoints and attach it to svc-a:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: svc-a
  labels:
    kubernetes.io/service-name: svc-a
addressType: FQDN
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "svc-a.x.svc.cluster.local"
      - "svc-a.y.svc.cluster.local"
So now you'll have svc-a.z.svc.cluster.local available in any namespace pointing to backends in both the X and Y namespaces.
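To check whether the setup resolves and routes as expected, you can call the selectorless Service from a temporary pod (a sketch; curlimages/curl is just a convenient image, and the namespace is assumed to literally be named z):
# Call the Service in namespace Z from a throwaway pod
kubectl run tmp --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s http://svc-a.z.svc.cluster.local

# Confirm the EndpointSlice is attached to the Service
kubectl -n z get endpointslices -l kubernetes.io/service-name=svc-a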

You can use the ExternalName Service type for this purpose. Create a new Service that selects the pods in namespace "Y", and point service "A" at it using externalName.
Example: assuming the new Service is called 'svc-b' and lives in namespace Y, you can modify your 'A' Service like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  namespace: X
spec:
  type: ExternalName
  externalName: svc-b.y.svc.cluster.local
  ports:
    - port: 80
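For completeness, svc-b in namespace Y could be an ordinary selector-based Service (a sketch; the label app: svc-a-staging and the target port are assumptions about the staging pods):
apiVersion: v1
kind: Service
metadata:
  name: svc-b
  namespace: y
spec:
  selector:
    app: svc-a-staging   # assumed label on the staging pods
  ports:
    - port: 80
      targetPort: 9376   # assumed container port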
Refer to this SO which helped in solving a similar issue.

As per Gari's suggestion I tried using an EndpointSlice, but it didn't work; kube-proxy appears to have an issue with internal virtual IPs.
Note: The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
The endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn't support virtual IPs as a destination.
I created three namespaces: test-1, test-2, and test-main.
You can go with Endpoints instead. Sharing the YAML example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: test-1
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: test-1
  labels:
    run: my-nginx
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: test-2
spec:
  selector:
    matchLabels:
      run: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-app
    spec:
      containers:
        - name: hello-app
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: test-2
  labels:
    run: hello-app
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    run: hello-app
---
apiVersion: v1
kind: Service
metadata:
  name: slice-svc
  namespace: test-main
spec:
  ports:
    - port: 80
      name: http
      targetPort: 80
  clusterIP: None
---
kind: Endpoints
apiVersion: v1
metadata:
  name: slice-svc
  namespace: test-main
subsets:
  - addresses:
      - ip: 10.102.24.29   # ClusterIP of Service my-nginx in namespace test-1
      - ip: 10.99.216.222  # ClusterIP of Service hello-app in namespace test-2
    ports:
      - port: 80
Note: a Service's ClusterIP changes when you delete and re-create the Service; otherwise it is stable, so this approach is usable when you don't have a better way to resolve the IP via DNS.
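To confirm the manual Endpoints are picked up, you can check the Endpoints object and resolve the headless Service from a throwaway pod (a sketch; busybox is just a convenient image for nslookup):
# The ENDPOINTS column should list the two ClusterIPs entered above
kubectl -n test-main get endpoints slice-svc

# slice-svc is headless, so DNS returns the endpoint IPs directly
kubectl -n test-main run tmp --rm -it --restart=Never --image=busybox --command -- \
  nslookup slice-svc.test-main.svc.cluster.local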

Related

mentioned ip to endpoints are not getting configured in k8s

I am trying to add IPs manually using an Endpoints object in YAML. However, the minikube cluster is populating the endpoints with its default IPs instead of the ones mentioned in the YAML file. Why?
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-container
          image: nginx:1.16
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-service
subsets:
  - ports:
      - port: 80
    addresses:
      - ip: 172.17.0.11  # configured IP
      - ip: 172.17.0.12  # configured IP
      - ip: 172.17.0.13  # configured IP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      nodePort: 30464
      port: 90
      targetPort: 80
IPs in the endpoints output (note 172.17.0.6, 172.17.0.7 and 172.17.0.8, while I have given 172.17.0.11, 172.17.0.12 and 172.17.0.13 in the YAML):
/home/ravi/k8s>kubectl get endpoints
NAME            ENDPOINTS                                    AGE
kubernetes      192.168.49.2:8443                            36h
nginx-service   172.17.0.6:80,172.17.0.7:80,172.17.0.8:80    5m59s
I have tried replicating your issue and got the configured IP addresses for the endpoints.
The mismatch might also be caused by the namespaces; please check that as well.
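To check the namespace angle, you can list every namespace that contains an Endpoints object with that name and inspect the one actually being used (a sketch of the checks):
# Find every nginx-service Endpoints object across namespaces
kubectl get endpoints -A | grep nginx-service

# Inspect the object in the current namespace
kubectl get endpoints nginx-service -o yaml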

IP provided by metallb is not accessible

I have a bare-metal Kubernetes cluster with MetalLB.
I have taken an IP from a GCP external load balancer and assigned it to MetalLB.
MetalLB has assigned that IP to the service, but I am still not able to access my service on that IP from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginxlb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 34.149.177.23-34.149.177.23
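Before digging further, it can help to confirm what MetalLB actually assigned and whether the Service has ready endpoints behind it (a sketch of the checks, not a fix):
# EXTERNAL-IP should show the address taken from the MetalLB address pool
kubectl get svc nginxlb

# The Service also needs ready pod endpoints behind it
kubectl get endpoints nginxlb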

Cannot access file inside Kubernetes cluster that has load balancer externally

I have the cluster setup below in AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hpa-example
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
        - name: hpa-example
          image: gcr.io/google_containers/hpa-example
          ports:
            - name: http-port
              containerPort: 80
          resources:
            requests:
              cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
    - port: 31001
      nodePort: 31001
      targetPort: http-port
      protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
The idea of this is to test autoscaling.
I need to have this available externally, so I added:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
    - port: 31001
      targetPort: 31001
  type: LoadBalancer
This now gives me an external IP; however, I cannot connect to it in Postman or via a browser.
What have I missed?
I have tried changing the ports between 80 and 31001, but that makes no difference.
As posted by user @David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that could prevent you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
The Service of type LoadBalancer, on the other hand:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
    - port: 31001
      targetPort: 31001 # <--- CULPRIT
  type: LoadBalancer
This definition sends your traffic directly to the pods on port 31001, but it should send it to port 80 (the port your app is actually responding on). You can change it to either:
targetPort: 80
targetPort: http-port
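For example, with the first option the corrected Service could look like this (the same manifest with only targetPort changed):
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
    - port: 31001      # port exposed by the load balancer stays the same
      targetPort: 80   # now matches the port the container actually listens on
  type: LoadBalancer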
You could also change the NodePort Service (hpa-example) to type LoadBalancer, as pointed out by user @David Maze!
After changing this definition you will be able to run:
$ kubectl get service
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
load-balancer-autoscaler   LoadBalancer   10.4.32.146   AA.BB.CC.DD   31001:31497/TCP   9m41s
and then curl AA.BB.CC.DD:31001 to get the reply of OK!
I encourage you to look at these additional resources regarding Kubernetes Services:
Docs.microsoft.com: AKS: Network: Services
Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types
Kubernetes.io: Docs: Concepts: Service

No load balancer created and static ip assigned to traefik ingress on GKE

When I set up an Ingress pointing at the Traefik service, I expect a load balancer to be created for that ingress controller on GKE, in the same way it would be for a LoadBalancer Service; I could then point to the static IP created.
However, when I list my Ingresses, there is no static IP assigned.
$ kubectl get ingresses -n kube-system
NAME              HOSTS                 ADDRESS   PORTS   AGE
traefik-ingress   traefik-ui.minikube             80      4m
traefik-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - path: "/"
            backend:
              serviceName: traefik-ingress-service
              servicePort: 80
traefik-deployment.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
You are creating a Service object for the traefik Deployment, but you have used the NodePort type, which does not provision an external load balancer. If you want Kubernetes to create a LoadBalancer for a Service, you need to specify type LoadBalancer in the Service, so your traefik Service would look like:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: LoadBalancer
This will talk to the GKE API and create a LoadBalancer with an IP for you.
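Once the Service type is changed, you can watch the Service until GKE finishes provisioning and the EXTERNAL-IP column changes from <pending> to a real address:
kubectl get service traefik-ingress-service -n kube-system --watch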

How to expose nginx on public Ip using NodePort service in Kubernetes?

I'm executing kubectl create -f nginx.yaml, which creates the pods successfully, but the pods aren't exposed on the public IP of my instance. The following is the YAML used by me, with service type NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
    - port: 443
      nodePort: 30443
      name: https
  selector:
    name: nginx
What could be incorrect in my approach or in the above YAML file that prevents the pods behind the Deployment from being exposed on the public IP?
PS: The firewall and ACLs are open to the internet on all TCP ports.
The endpoints were not being added. On debugging, I found a mismatch between the Deployment's labels and the Service's selector, so I changed the label key from "app" to "name" and it worked.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx
Jeel is right: your Service selector does not match the Pod labels. If you fix that, as Jeel already showed in this answer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx
your Service will be exposed on the node's IP address, because its type is NodePort.
If your node IP is, let's say, 35.226.16.207, you can connect to your Pod using this IP and the NodePort:
$ curl 35.226.16.207:30080
In this case, your node must have a public IP; otherwise, you can't access it.
As a second option, you can create a LoadBalancer-type Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
  selector:
    name: nginx
This will provide you with a public IP.
For more details, check this.