How to access service created in another namespace - kubernetes

I'm having issues when accessing a service present in another namespace.
I have 2 namespaces (in the same cluster) airflow-dev and dask-dev.
In the dask-dev namespace, I have a Dask cluster (scheduler and workers) deployed. I also created a ClusterIP service for the dask-scheduler pod. I'm able to access the dask-scheduler pod from Chrome using the 'kubectl port-forward' command:
kubectl port-forward --namespace dask-dev svc/dask-dev-scheduler 5002:80
However, I'm not able to access the service (or the dask-scheduler pod) from a pod (airflow-scheduler) in the airflow-dev namespace. I get a 'Host or service not found' error when trying to access it using:
dask-dev-scheduler.dask-dev.svc.cluster.local:8786
Below is the service that I have created for dask-dev-scheduler. Could you please let me know how to access this service from the airflow-dev namespace?
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-scheduler
  namespace: dask-dev
  labels:
    app: dask-dev
    app.kubernetes.io/managed-by: Helm
    chart: dask-dev-4.5.7
    component: scheduler
    heritage: Helm
    release: dask-dev
  annotations:
    meta.helm.sh/release-name: dask-dev
    meta.helm.sh/release-namespace: dask-dev
spec:
  ports:
  - name: dask-dev-scheduler
    protocol: TCP
    port: 8786
    targetPort: 8786
  - name: dask-dev-webui
    protocol: TCP
    port: 80
    targetPort: 8787
  selector:
    app: dask-dev
    component: scheduler
    release: dask-dev
  clusterIP: 10.0.249.111
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

You can use a local service to reference an external service (a service in a different namespace) using the ExternalName service type.
ExternalName services do not have selectors or any defined ports or endpoints; therefore, you can use an ExternalName service to direct traffic to an external service.
apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: namespace-b
spec:
  selector:
    app: my-app-b
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
apiVersion: v1
kind: Service
metadata:
  name: service-b-ref
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: service-b.namespace-b.svc.cluster.local
Any traffic in namespace-a that connects to service-b-ref:<port> will be routed to service-b in namespace-b (service-b.namespace-b.svc.cluster.local)
Therefore, a call to service-b-ref:3000 will route to our service-b.
In your example, you'd just need to create a service in airflow-dev that will route traffic to the dask-dev-scheduler in the dask-dev namespace:
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-svc
  namespace: airflow-dev
spec:
  type: ExternalName
  externalName: dask-dev-scheduler.dask-dev.svc.cluster.local
Therefore, all airflow-dev resources that need to connect to the dask-dev-scheduler would call: dask-dev-svc:8786
apiVersion: v1
kind: Service
metadata:
  name: dask-dev-scheduler
  namespace: dask-dev
spec:
  ports:
  - name: dask-dev-scheduler
    protocol: TCP
    port: 8786
    targetPort: 8786
  # ...
  selector:
    app: dask-dev
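To confirm that the ExternalName service resolves from the airflow-dev namespace, a quick DNS check from a throwaway pod can help (the pod name and busybox image here are just an example, not part of the original setup):
kubectl run dns-check -n airflow-dev --rm -it --restart=Never --image=busybox:1.36 -- nslookup dask-dev-svc
# should return a CNAME pointing at dask-dev-scheduler.dask-dev.svc.cluster.local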

The cluster domain doesn't always have to be cluster.local. Try just using dask-dev-scheduler.dask-dev.svc. Assuming Airflow respects the ndots lookup strategy in the generated resolv.conf mounted into the pod, that should find it.
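To see the search domains and ndots setting the pod actually gets, you can read the resolv.conf mounted into it (the pod name is a placeholder, and the values shown in the comments are only typical defaults; they vary by cluster):
kubectl exec -n airflow-dev <airflow-scheduler-pod> -- cat /etc/resolv.conf
# search airflow-dev.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.0.0.10
# options ndots:5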

Did you upgrade CoreDNS to 1.8.3? I just faced the same issue with my EKS cluster after upgrading from 1.19 to 1.20.
Downgrading CoreDNS to 1.8.0 solved the issue.
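If you want to check which CoreDNS version the cluster is currently running, one way is to read the image tag off the deployment:
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'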

Related

Kubernetes Service for selecting pods in two namespaces

I have a service "A" deployed in the "X" namespace. Service "Z" from the "P" namespace calls it at
svc-a.x.svc.cluster.local
I have to deploy a staging version of service "A" in the Y namespace as well, and I want to register those IPs under
svc-a.x.svc.cluster.local
Is there any way to do that? I want the main service to select pods from different namespaces.
You can try using a Service without selectors together with an EndpointSlice that refers to a Service from each namespace.
Create svc-a in namespace X which selects / points to pods in namespace X. The Service will be available at svc-a.x.svc.cluster.local.
Create svc-a in namespace Y which selects / points to pods in namespace Y. The Service will be available at svc-a.y.svc.cluster.local.
Create a svc-a in namespace Z without selectors.
apiVersion: v1
kind: Service
metadata:
  name: svc-a
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The Service will be available at svc-a.z.svc.cluster.local.
Create an EndpointSlice in namespace Z with svc-a.x.svc.cluster.local and svc-a.y.svc.cluster.local as endpoints and attach it to svc-a:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: svc-a
  labels:
    kubernetes.io/service-name: svc-a
addressType: FQDN
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "svc-a.x.svc.cluster.local"
  - "svc-a.y.svc.cluster.local"
So now you'll have svc-a.z.svc.cluster.local available in any namespace pointing to backends in both the X and Y namespaces.
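To verify that the slice is attached to the Service, you can list EndpointSlices by the service-name label (assuming the namespace is literally named z):
kubectl get endpointslice -n z -l kubernetes.io/service-name=svc-a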
You can use the ExternalName service type for this purpose. You can create a new service which selects the pods from the 'Y' namespace and point your 'A' service at it using externalName.
Example: assume the new service is named 'svc-b' and lives in the Y namespace; you can then modify your 'A' service to something like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  namespace: X
spec:
  type: ExternalName
  externalName: svc-b.y.svc.cluster.local
  ports:
  - port: 80
Refer to this SO answer, which helped in solving a similar issue.
As per Gari's suggestion I tried using the EndpointSlice, but it didn't work; it looks like kube-proxy has an issue with internal virtual IPs:
Note: The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
The endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn't support virtual IPs as a destination.
I created 3 namespaces: test-1, test-2, and test-main.
You can go with Endpoints instead. Sharing the YAML example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: test-1
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: test-1
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: test-2
spec:
  selector:
    matchLabels:
      run: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: test-2
  labels:
    run: hello-app
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    run: hello-app
---
apiVersion: v1
kind: Service
metadata:
  name: slice-svc
  namespace: test-main
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  clusterIP: None
---
kind: Endpoints
apiVersion: v1
metadata:
  name: slice-svc
  namespace: test-main
subsets:
- addresses:
  - ip: 10.102.24.29   # ClusterIP of the my-nginx service in test-1
  - ip: 10.99.216.222  # ClusterIP of the hello-app service in test-2
  ports:
  - port: 80
Note: the ClusterIP changes when you delete and re-create a service; otherwise it is stable, so this is workable when you don't have a better way to resolve the IPs in DNS.
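The ClusterIPs to paste into the Endpoints object can be looked up like this (the IPs in the manifest above are simply the values from this particular test cluster):
kubectl get svc my-nginx -n test-1 -o jsonpath='{.spec.clusterIP}'
kubectl get svc hello-app -n test-2 -o jsonpath='{.spec.clusterIP}'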

Cannot connect to Kubernetes NodePort Service

I have a running pod that was created with the following pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
I then created a Service using the following service-definition.yaml:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
I then ran kubectl describe node minikube to find the Node IP I should be connecting to -- which yielded:
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
But I get no response when I run the following curl command:
curl 192.168.49.2:30008
The request also times out when I try to access 192.168.49.2:30008 from a browser.
The pod logs show that the container is up and running. Why can't I access my Service?
The problem is that you are trying to access your service at the port parameter, which is the internal port at which the service is exposed, even when using the NodePort type.
The parameter you were looking for is called nodePort, which can optionally be specified together with port and targetPort. Quoting the documentation:
By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767).
Since you didn't specify the nodePort, one in the range was automatically picked up. You can check which one by:
kubectl get svc -owide
And then access your service externally at that port.
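For example (the node IP is taken from the question; the actual nodePort is whatever appears after the colon in the PORT(S) column):
kubectl get svc microserviceone-service -o wide
# PORT(S) will look like 30008:3XXXX/TCP; the second number is the allocated nodePort
curl 192.168.49.2:<allocated-nodePort>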
As an alternative, you can change your service definition to be something like:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 30008
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
But keep in mind that you may need to delete your service and create it again in order to change the allocated nodePort.
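For example, re-creating it from the file used in the question:
kubectl delete -f service-definition.yaml
kubectl apply -f service-definition.yaml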
I think you missed the right port in your service. For reference, your Pod is:
apiVersion: v1
kind: Pod
metadata:
  name: microservice-one-pod-name
  labels:
    app: microservice-one-app-label
    type: front-end
spec:
  containers:
  - name: microservice-one
    image: vismarkjuarez1994/microserviceone
    ports:
    - containerPort: 2019
and your service should be like this:
kind: Service
apiVersion: v1
metadata:
  name: microserviceone-service
spec:
  ports:
  - port: 2019
    targetPort: 2019
    nodePort: 30008
    protocol: TCP
  selector:
    app: microservice-one-app-label
  type: NodePort
You can also access your app after enabling the Minikube ingress addon, if you want to try Ingress with Minikube:
minikube addons enable ingress
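A minimal Ingress pointing at the service above could look like this (the Ingress name, path, and nginx ingress class are assumptions, not part of the original answer):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microserviceone-ingress   # hypothetical name
spec:
  ingressClassName: nginx         # class installed by the minikube ingress addon
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: microserviceone-service
            port:
              number: 2019        # the service port from the Service above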

Cannot access file inside Kubernetes cluster that has load balancer externally

I have the cluster setup below in AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hpa-example
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
The idea of this is to test autoscaling.
I need to have this available externally, so I added:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001
  type: LoadBalancer
This now gives me an external IP; however, I cannot connect to it in Postman or via a browser.
What have I missed?
I have tried changing the ports between 80 and 31001, but that makes no difference.
As posted by user @David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that prevents you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
The Service of type LoadBalancer, on the other hand, is a different story:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001 # <--- CULPRIT
  type: LoadBalancer
This definition sends your traffic directly to the pods on port 31001, but it should send it to port 80 (the port your app is actually responding on). You can change it to either:
targetPort: 80
targetPort: http-port
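Put together, a corrected version of the load-balancer-autoscaler Service using the named port would be (a sketch based on the manifests above):
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: http-port   # matches the named container port in the Deployment
  type: LoadBalancer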
You could also change the NodePort Service (hpa-example) to type LoadBalancer, as pointed out by user @David Maze!
After changing this definition you will be able to run:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
load-balancer-autoscaler LoadBalancer 10.4.32.146 AA.BB.CC.DD 31001:31497/TCP 9m41s
curl AA.BB.CC.DD:31001 and get the reply of OK!
I encourage you to look on the additional resources regarding Kubernetes services:
Docs.microsoft.com: AKS: Network: Services
Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types
Kubernetes.io: Docs: Concepts: Service

GCP Couldn't reach Kubernetes External Load Balancer IP from outside

I have a cluster created in GCP with this simple k8s YAML file:
apiVersion: v1
kind: Service
metadata:
  name: lb-svc
  labels:
    app: lb-demo
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: np-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: np-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: np-demo
  template:
    metadata:
      labels:
        app: np-demo
    spec:
      containers:
      - name: np-pod
        image: nigelpoulton/k8s-deep-dive:0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
This YAML configuration has a LoadBalancer Service, which in turn exposes an external IP address to the public, so we can see the external IP address using:
kubectl get svc
The issue is that I can easily reach the load balancer with curl from Cloud Shell, but I couldn't reach it when trying to access it from outside (for example, a browser).
Tried:
curl external-ip:8080
Any help?
Your service IP is only accessible from the local VPC. If you need to expose a Service or Ingress publicly, you need to reserve a static IP; see https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address for how to reserve one.
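For example, a regional external address can be reserved and read back with gcloud (the address name and region are placeholders):
gcloud compute addresses create my-static-ip --region us-central1
gcloud compute addresses describe my-static-ip --region us-central1 --format='value(address)'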
To assign the reserved static IP to your Service, set loadBalancerIP in the Service configuration:
apiVersion: v1
kind: Service
metadata:
  name: lb-svc
  labels:
    app: lb-demo
spec:
  type: LoadBalancer
  loadBalancerIP: <your reserved ip>
  ports:
  - port: 8080
  selector:
    app: np-demo
To assign the IP to an Ingress instead:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <name of reserved static ip>
  labels:
    app: my-app
spec:
  backend:
    serviceName: lb-svc
    servicePort: 8080
read more here
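Once the Service is re-applied with loadBalancerIP set, the reserved address should eventually show up in the EXTERNAL-IP column:
kubectl get svc lb-svc -o wide
# EXTERNAL-IP shows <pending> until provisioning finishes, then the reserved address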

Kubernetes deployment not publicly accessible

I'm trying to access a deployment on our Kubernetes cluster on Azure (Azure Kubernetes Service, AKS). Here are the configuration files for the deployment and the service that should expose the deployment.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
      - name: backend
        image: registry.gitlab.com/izit/mira-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
After this deployment I don't see any requests being handled. I get an error message in my browser saying the site is inaccessible. Any idea what I could have configured wrong?
Your service selector labels and pod labels do not match.
You have the app: mira-api label in the deployment's pod template but run: mira-api in the service's label selector.
Change your service selector label to match the pod label as follows.
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: mira-api
To make sure your service is actually selecting the backend pods, run kubectl describe svc <svc name> and check whether it has any Endpoints listed.
# kubectl describe svc postgres
Name:              postgres
Namespace:         default
Labels:            app=postgres
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector:          app=postgres
Type:              ClusterIP
IP:                10.106.7.183
Port:              default  5432/TCP
TargetPort:        5432/TCP
Endpoints:         10.244.2.117:5432 <------- This line
Session Affinity:  None
Events:            <none>