IP provided by MetalLB is not accessible - Kubernetes

I have a bare-metal Kubernetes cluster with MetalLB.
I reserved an IP from a GCP external load balancer and assigned it to MetalLB.
MetalLB has assigned that IP to the Service, but I am still not able to reach the Service on that IP from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nginxlb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 34.149.177.23-34.149.177.23
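A couple of checks usually narrow this kind of problem down (a hedged diagnostic sketch; the speaker label below assumes the stock MetalLB manifests and may differ for your install):

# the Service must show the MetalLB address under EXTERNAL-IP, not <pending>
kubectl get svc nginxlb
# events on the Service should show MetalLB allocating and announcing the IP
kubectl describe svc nginxlb
# confirm the MetalLB controller and speaker pods are healthy, then see what the speaker is announcing
kubectl -n metallb-system get pods
kubectl -n metallb-system logs -l component=speaker --tail=50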

Related

Kubernetes Service for selecting pods in two namespaces

I have a service "A" deployed in the "X" namespace. Service "Z" from the "P" namespace calls it at
svc-a.x.svc.cluster.local
I have to deploy a staging instance of service "A" in the Y namespace as well, and I want to register those IPs under
svc-a.x.svc.cluster.local
Is there any way to do this? I want the main service to select pods from different namespaces.
You can try using a Service without selectors together with an EndpointSlice that refers to the Service in each namespace.
Create svc-a in namespace X which selects / points to pods in namespace X. The Service will be available at svc-a.x.svc.cluster.local.
Create svc-a in namespace Y which selects / points to pods in namespace Y. The Service will be available at svc-a.y.svc.cluster.local.
Create a svc-a in namespace Z without selectors.
apiVersion: v1
kind: Service
metadata:
  name: svc-a
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
The Service will be available at svc-a.z.svc.cluster.local.
Create an EndpointSlice in namespace Z with svc-a.x.svc.cluster.local and svc-a.y.svc.cluster.local as endpoints and attach it to svc-a:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: svc-a
  labels:
    kubernetes.io/service-name: svc-a
addressType: FQDN
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "svc-a.x.svc.cluster.local"
  - "svc-a.y.svc.cluster.local"
So now you'll have svc-a.z.svc.cluster.local available in any namespace pointing to backends in both the X and Y namespaces.
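To test it end to end, something like this can be run from a throwaway pod (a hedged sketch; busybox is just a convenient test client, and whether the request actually reaches a backend depends on your cluster's proxy handling FQDN endpoints, which the follow-up below ran into):

kubectl run tmp --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://svc-a.z.svc.cluster.local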
You can use the ExternalName Service type for this purpose. Create a new Service that selects the pods in the 'Y' namespace, and point Service 'A' at it via externalName.
Example: assuming the new Service is named 'svc-b' and lives in the Y namespace, you can modify your 'A' Service like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-a
  namespace: X
spec:
  type: ExternalName
  externalName: svc-b.y.svc.cluster.local
  ports:
  - port: 80
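Note that ExternalName only creates a DNS CNAME; nothing is proxied and ports are not remapped. A quick way to see that (a hedged sketch; busybox is just a convenient test client):

kubectl run tmp -n X --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup svc-a.x.svc.cluster.local
# the answer is a CNAME to svc-b.y.svc.cluster.local, which then resolves to that Service's ClusterIP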
Refer to this SO answer, which helped in solving a similar issue.
As per Gari's suggestion, I tried using the EndpointSlice but it didn't work; it looks like kube-proxy has an issue with internal virtual IPs.
Note: The endpoint IPs must not be loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6) or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
The endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn't support virtual IPs as a destination.
I created 3 namespaces: test-1, test-2, test-main.
You can go with Endpoints instead. Sharing the YAML example below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: test-1
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: test-1
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: test-2
spec:
  selector:
    matchLabels:
      run: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-app
  namespace: test-2
  labels:
    run: hello-app
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    run: hello-app
---
apiVersion: v1
kind: Service
metadata:
  name: slice-svc
  namespace: test-main
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  clusterIP: None
---
kind: Endpoints
apiVersion: v1
metadata:
  name: slice-svc
  namespace: test-main
subsets:
- addresses:
  - ip: 10.102.24.29   # ClusterIP of Service my-nginx in namespace test-1
  - ip: 10.99.216.222  # ClusterIP of Service hello-app in namespace test-2
  ports:
  - port: 80
Note: the ClusterIP changes when you delete and re-create a Service; otherwise it is stable enough to use when you don't have a better way to resolve the IP via DNS.
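If you need to look up the ClusterIPs to put into the Endpoints object, and want a quick way to test the fan-out from test-main, something like this works (a hedged sketch; busybox is just a convenient test client):

kubectl get svc my-nginx -n test-1 -o jsonpath='{.spec.clusterIP}'
kubectl get svc hello-app -n test-2 -o jsonpath='{.spec.clusterIP}'

# slice-svc is headless (clusterIP: None), so DNS returns the endpoint IPs directly
kubectl run tmp -n test-main --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://slice-svc.test-main.svc.cluster.local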

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
      - name: identityold
        image: <image name from docker hub>
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
  - port: 80
    targetPort: 8081
    nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file,
and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command in order to expose it properly.
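For example, assuming the Service above is in the default namespace, something like this should give you a reachable URL:

# print a URL such as http://<minikube-ip>:30036 (on some drivers this keeps a tunnel open)
minikube service identityold-svc --url
# or open it directly in the default browser
minikube service identityold-svc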

GKE NodePort service refusing incoming traffic

I have created a NodePort Service in Google Cloud with the following specification. I have a firewall rule that allows traffic from 0.0.0.0/0 to port 30100, and I have verified in the Stackdriver logs that the traffic is allowed, but when I hit http://<node-ip>:30100 with curl or from the browser, I get no response. I also don't know how to debug the issue further; can someone please suggest something?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Thanks.
You need to fix the container port; it must be 80, because the nginx container exposes that port, as you can see here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Also, you need to create a firewall rule to permit the traffic to the node, as mentioned by @danyL in the comments:
gcloud compute firewall-rules create test-node-port --allow tcp:30100
Get the node IP with the command
kubectl get nodes -owide
And then try to access the nginx page with:
curl http://<NODEIP>:30100

No load balancer created and static ip assigned to traefik ingress on GKE

When I set up an Ingress pointing to the Traefik service, I expect a load balancer to be created for that ingress controller on GKE, in the same way one is for a LoadBalancer Service. I could then point to the static IP that gets created.
However, when I get my ingresses, there is no static IP assigned.
$ kubectl get ingresses -n kube-system
NAME              HOSTS                 ADDRESS   PORTS   AGE
traefik-ingress   traefik-ui.minikube             80      4m
traefik-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - path: "/"
        backend:
          serviceName: traefik-ingress-service
          servicePort: 80
traefik-deployment.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort
You are creating a Service object for the Traefik deployment, but you have used the NodePort type, which only exposes the Service on each node's IP at a high port and does not provision a cloud load balancer. If you want Kubernetes to create a LoadBalancer for a Service, you need to specify type LoadBalancer in your Service, so your Traefik Service would look like:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
This will talk to the GKE API and create a LoadBalancer with an IP for you.
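You can then watch GKE allocate the external IP on the Service (a minimal check; the exact value is whatever GKE assigns):

kubectl get svc traefik-ingress-service -n kube-system -w
# wait for EXTERNAL-IP to change from <pending> to a real address,
# then point your DNS record (or /etc/hosts for traefik-ui.minikube) at that IP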

How to expose nginx on public Ip using NodePort service in Kubernetes?

I'm executing kubectl create -f nginx.yaml, which creates the pods successfully. But the pods aren't exposed on the public IP of my instance. Following is the YAML used by me, with the Service type as NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx
What could be incorrect in my approach or in the above YAML file that prevents exposing the deployment's pods on the public IP?
PS: Firewall and ACLs are open to the internet on all TCP ports.
The endpoint was not getting added. On debugging I found that the labels on the Deployment and the Service selector did not match. Hence I changed the label key from "app" to "name" and it worked.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    name: nginx
Jeel is right. Your Service selector does not match the Pod labels.
If you fix that, as Jeel already showed in this answer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    name: nginx
Your Service will be exposed on the node IP address, because your Service type is NodePort.
If your node IP is, let's say, 35.226.16.207, you can connect to your Pod using this IP and the NodePort:
$ curl 35.226.16.207:30080
In this case, your node must have a public IP; otherwise, you can't access it.
As a second option, you can create a LoadBalancer type Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  selector:
    name: nginx
This will provide you with a public IP.
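Once the cloud provider finishes provisioning, the address appears on the Service (a minimal check; the IP is whatever your provider assigns):

kubectl get svc nginx
# the EXTERNAL-IP column shows the public IP once it is provisioned
curl http://<EXTERNAL-IP>/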
For more details, check this