Routing traffic from GCP VM to VPC Native Cloud DNS GKE Cluster - kubernetes

I'm trying to achieve the following scenario:
My VM should be able to connect to the Pods behind a ClusterIP Service. The new beta feature (VPC-scope Cloud DNS) suggests this should be possible, but maybe I'm doing something wrong or have misunderstood something...
The full documentation is here:
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns?hl=de#vpc_scope_dns
DNS is working, I get a service IP, and a route to the default network is available.
I can connect via the pod IP, but the service IP does not seem to be routable from outside the cluster. I know that normally a ClusterIP is not reachable from outside the cluster, but then I don't understand why this feature exists, and it doesn't match the diagram in the docs, which suggests this feature provides cross-cluster/VM communication via Services.
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
Do I understand the feature wrong or am I missing a configuration?
Traffic check (screenshot not included):

This only works with headless Kubernetes Services, so you'll need to modify your service spec to include clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
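As a quick check from the VM, you can resolve the headless service via Cloud DNS and then connect to one of the returned pod IPs. This is a minimal sketch, assuming the service lives in the default namespace; <cluster-domain> stands for the custom cluster DNS domain configured when enabling VPC-scope Cloud DNS, and <pod-ip> is one of the addresses the lookup returns:
# Resolve the headless service from the VM; with VPC-scope Cloud DNS the
# lookup should return the pod IPs directly instead of a single ClusterIP.
nslookup my-cip-service.default.svc.<cluster-domain>

# Connect to one of the returned pod IPs on the target port.
curl http://<pod-ip>:8080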

Related

Issue accessing one service from another in Kubernetes

I'm trying to connect to one service from another in the same namespace, using ClusterIP to create the services. Once a service is created, I use its IP to access it. Requests succeed sometimes and fail at other times, although both pods are up and running. Below is the service configuration:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
spec:
  selector:
    app: ServiceA
  ports:
  - name: http
    port: 80
    targetPort: 8080
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: serviceB
spec:
  selector:
    app: ServiceB
  ports:
  - name: http
    port: 80
    targetPort: 8123
  type: ClusterIP
Please use the service name when invoking it, as below:
http://serviceA:80
K8s offers DNS for Services and Pods
Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.
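For example, from a pod in the same namespace, both the short name and the fully qualified DNS name should work; the fully qualified form below assumes the services are in the default namespace:
# Short service name resolves within the same namespace
curl http://serviceA:80

# Fully qualified form (assumes the "default" namespace)
curl http://serviceA.default.svc.cluster.local:80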

How can I access to services outside the cluster using kubectl proxy?

We spin up a cluster with kubeadm in Kubernetes, and the service's .yaml file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: neo4j
  labels:
    app: neo4j
    component: core
spec:
  clusterIP: None
  ports:
  - port: 7474
    targetPort: 7474
    name: browser
  - port: 6362
    targetPort: 6362
    name: backup
  selector:
    app: neo4j
    component: core
After all pods and services are running, I run kubectl proxy and it says:
Starting to serve on 127.0.0.1:8001
So when I want to access this service like:
curl localhost:8001/api/
it's only reachable from inside the cluster! How can I reach the service from outside the cluster?
You should expose your service using NodePort:
apiVersion: v1
kind: Service
metadata:
  name: neo4j
  labels:
    app: neo4j
    component: core
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - port: 7474
    targetPort: 7474
    name: browser
  - port: 6362
    targetPort: 6362
    name: backup
  selector:
    app: neo4j
    component: core
Now if you describe your service using
kubectl describe svc neo4j
You will get a nodePort value in the 30000-32767 range, and you can access your service from outside the cluster using
curl http://<node_ip>:<node_port>
Hope this helps.
EDIT: Correct, you can't use clusterIP: None when exposing the service through a NodePort. clusterIP: None means Kubernetes does no internal load balancing for the service; for a similar effect with a NodePort you can instead set externalTrafficPolicy: Local in the service definition.
Alternatively, you might be able to use an ingress to route traffic to the correct Service.
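A minimal Ingress sketch along those lines might look like the following; it assumes an ingress controller is already installed in the cluster, and the path and pathType are assumptions, while the backend name and port match the neo4j browser service above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: neo4j-browser
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: neo4j
            port:
              number: 7474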

Why don't my Kubernetes services publish on the port I specify?

I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).
For instance, I'm going through this walkthrough on Ingress and I create the following Deployment and Service objects per the instructions:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: "gokul93/hello-world:latest"
        imagePullPolicy: Always
        name: hello-world-container
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:
~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I also verified that none of my nodes are listening on 8080, but they are listening on 31669.
This isn't super ideal - especially considering that the Ingress portion will need to know what servicePort is being used (the walkthrough references this at 8080).
By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.
Am I missing something? Am I doing it all wrong?
Matt,
The reason a random port is being allocated is that you are creating a service of type NodePort.
K8s documentation explains NodePort here
Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at 10.109.24.16:9376. Essentially this service can be reached in one of the following ways:
Service ip/port :- 10.109.24.16:9376
Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort
You can also query the pod directly to test that the pod is in-fact exposing a service.
Pod ip/port: 10.40.0.4:8080
Since your eventual goal is to use an ingress controller for external reachability to your service, "type: ClusterIP" might be sufficient for your needs.
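If you go that route, the only change to the Service from the question is the type (ClusterIP is also the default when type is omitted); a sketch:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  type: ClusterIP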

K8s NodePort not available to the public

If the Google Container Engine cluster has the service configured as LoadBalancer, it's available to the general public as expected. But if I change it to NodePort, it is not available at <nodeIp>:<nodePort>.
The service (web-service.yml) looks like that:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    nodePort: 30000
  selector:
    name: web
I would be very happy if someone could tell me why it isn't working.
Here is some background
The cluster consists of a MongoDB deployment (db-deployment.yml) with a corresponding service (db-service.yml) and a Jetty deployment (web-deployment.yml) with a corresponding service (web-service.yml).
It can be found at GitHub as part of this project with the according Readme.md file.
Did you open the port (30000) in the firewall? Also make sure you use the public IP of your VM.
See this answer
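On Google Cloud, a firewall rule along these lines should open the NodePort; the rule name and network are assumptions, so adjust them to your project:
# Allow inbound TCP traffic to the NodePort on nodes in the default network
gcloud compute firewall-rules create allow-nodeport-30000 \
    --network default \
    --allow tcp:30000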

Kubernetes local port for deployment in Minikube

I'm trying to expose my Deployment to a port which I can access through my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service whose provisioning never finishes, and the external IP never gets assigned.
The second results in ports of bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service is meant for Cloud providers and not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
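For example, with the bot service from the question:
# Open the service in a browser, or just print its URL with --url
minikube service bot
minikube service bot --url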