K8s NodePort not available to public

If the Google Container Engine cluster has the service configured as LoadBalancer, it's available to the general public as expected. But if I change that to NodePort, it is not available at <nodeIp>:<nodePort>.
The service (web-service.yml) looks like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    nodePort: 30000
  selector:
    name: web
I would be very happy if someone could tell me why it isn't working.
Here is some background:
The cluster consists of a MongoDB deployment (db-deployment.yml) with a corresponding service (db-service.yml) and a Jetty deployment (web-deployment.yml) with a corresponding service (web-service.yml).
It can be found on GitHub as part of this project with the corresponding Readme.md file.

Did you open the port (30000) in the firewall? Also make sure you use the public IP of your VM.
See this answer
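On GKE, opening that port means adding a VPC firewall rule, along these lines (a sketch; the rule name is arbitrary and tcp:30000 matches the nodePort above):
gcloud compute firewall-rules create allow-nodeport-30000 --allow tcp:30000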

Related

NodePort is not working on minikube on virtual box

I am learning Kubernetes and have a simple deployment and a NodePort service. I am not able to access my deployment using the NodePort. I tried the hyperkit, docker, and virtualbox drivers.
Context
My Java application is running on port 8080 (Tomcat server).
My service port is 8080.
My nodePort is 32000.
Here is the service file
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080        # ---------> Service
    targetPort: 8080  # ----> Tomcat Port
    nodePort: 32000   # ----> NodePort
    protocol: TCP
  selector:
    app: file-process-app
The Minikube URL is:
minikube service file-process-service --url
>> http://192.168.59.100:32000
Now, when I try to access it via Postman, I get connection refused. Can anyone tell me where I am going wrong, or how I can debug this further?
Thanks DavidMaze - I am attaching the endpoints; they show None for my service.
As hinted by @DavidMaze, I found that the service was created but the endpoints were not. Debugging further, I found that I had specified the wrong pod selector in the service, hence no endpoints were created.
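For reference, a quick way to spot this class of problem (service and label names taken from the question above):
kubectl get endpoints file-process-service   # ENDPOINTS shows <none> when the selector matches no pods
kubectl get pods --show-labels               # compare the pod labels against the service's selector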

Routing traffic from GCP VM to VPC Native Cloud DNS GKE Cluster

I'm trying to achieve the following scenario:
My VM should be able to connect over a ClusterIP service to the pods behind it. The new beta feature here looks like it should make this possible, but maybe I'm doing something wrong or have misunderstood something...
The full documentation is here:
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns?hl=de#vpc_scope_dns
The DNS is working: I get a service IP, and a route to the default network is available.
I can connect via the pod IP, but it seems the service IP is not routable from outside the cluster. I know that normally a ClusterIP is not reachable from outside the cluster, but then I don't understand why this feature exists, and it also does not match the diagram provided in the docs, as the feature seems to provide cross-cluster/VM communication via services.
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
Do I misunderstand the feature, or am I missing some configuration?
Traffic Check:
This only works with headless Kubernetes services, so you'll need to modify your service spec to include clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
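A sketch of the check from the VM (assuming VPC-scope Cloud DNS is enabled as in the linked docs). Note that with a headless service the DNS name resolves straight to the pod IPs, so you connect to the pod's port (8080 here), not the service port:
nslookup my-cip-service.default.svc.cluster.local   # should return the pod IPs directly
curl http://my-cip-service.default.svc.cluster.local:8080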

Swagger Host Port Missing in Kubernetes

My Spring app + Swagger (working on 8080) runs successfully locally, but when I deploy the app on Kubernetes the port information is missing and it calls the host without a port, so the request fails.
Locally, this information is shown in the UI like below:
[ Base URL: localhost:8080/customerservice/api ]
http://localhost:8080/customerservice/api/v2/api-docs
When deployed on Kubernetes:
[ Base URL: kubernetes-ip/customerservice/api ]
http://kubernetes-ip:32004/customerservice/api/v2/api-docs
But the base URL must be kubernetes-ip:32004/customerservice/api.
I have created a Kubernetes service (NodePort) to access the deployment, and 32004 is the nodePort.
kind: Service
apiVersion: v1
metadata:
  name: customer-service
  namespace: altyapi
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 32004
  selector:
    app: customer
  type: NodePort
What am I missing here? Thanks for any help and suggestions.

Kubernetes: Multiple containers that have to communicate + exposed nodePort

In my setup, there is a set of containers that were initially built to run with docker-compose. After moving to Kubernetes I'm facing the following challenges:
docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:
they could all be part of one pod and therefore communicate via localhost
they could all have a common label with matching key:value pairs and a service, but how does one handle ports?
I need to expose an internal port on a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:
something like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-nodeport
spec:
  type: NodePort
  ports:
  - name: "3000-30001"
    port: 3000
    nodePort: 30001
  selector:
    app: frontend
status:
  loadBalancer: {}
Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?
As you researched, you can indeed take two approaches:
IF your containers are to be scaled together, place them inside the same pod and communicate through localhost over separate ports (see the sketch after these two points). This is less likely your case, since this approach is more suitable when the containerized app resembles processes on one physical box rather than separate services/servers.
IF your containers are to be scaled separately, which is more probably your case, then use a service. With services, in place of localhost (from the previous point) you use either the bare service name (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) if the services are accessed across namespaces. As opposed to the previous point, where you had to use different ports for your containers (since you address localhost), here you can reuse the same port across multiple services, since each service:port must be unique. With a service you can also remap ports from the containers if you wish.
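To illustrate the first approach, a minimal sketch of a pod running two containers that talk over localhost (the pod name and images are just for demonstration):
apiVersion: v1
kind: Pod
metadata:
  name: two-container-demo
spec:
  containers:
  - name: web
    image: nginx    # serves on port 80
  - name: sidecar
    image: busybox
    # containers in one pod share a network namespace,
    # so the sidecar reaches nginx via localhost
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]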
Since you asked this as an introductory question, two words of caution:
Service resolution works from the standpoint of a pod/container. To test it you actually need to exec into a container (or proxy from the host), and this is a common point of confusion. To be on the safe side, test service:port accessibility from within an actual container, not from the master.
Finally, just to mimic a docker-compose setup for inter-container networking, you don't need to expose a NodePort at all. The service layer in Kubernetes will take care of the DNS handling; NodePort has a different purpose.
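For example, to verify the first caution point, the check could look like this (pod and service names are placeholders):
kubectl exec -it my-pod -- sh
# then, from inside the container:
wget -qO- http://my-service:8080                            # bare service name, same namespace
wget -qO- http://my-service.default.svc.cluster.local:8080  # FQDN form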
I need to expose an internal port on a certain NodePort. What does such a service config look like?
You are on the right track. Here is a nice overview to get you started, and a reference relevant to your question is given below:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP
Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?
The first one is, say, an API server; we'll call it svc-my-api. It will use pod(s) labeled app: my-api, talk to the pod's port 80, and be accessible by other pods (in the same namespace) as host:svc-my-api and port:8080:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-api
  labels:
    app: my-api
spec:
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
The second one is, say, a MySQL server; we'll call it svc-my-database. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host:svc-my-database and port:3306:
apiVersion: v1
kind: Service
metadata:
  name: svc-my-database
  labels:
    app: my-database
spec:
  selector:
    app: my-database
  ports:
  - name: http
    protocol: TCP
    port: 3306
    targetPort: 3306
1.- You can add some parameters to your pod resource (or any other resource that creates pods), as follows:
...
spec:
  hostname: foo-{1..4} # keep in mind this line
  subdomain: bar       # and this line
  containers:
  - image: busybox
...
Note: imagine you just created 4 pods, with hostnames foo-1, foo-2, foo-3, and foo-4. These are separate pods; you can't actually write foo-{1..4}, so this is just for demo purposes.
If you now create a service with the same name as the subdomain, you would be able to reach the pod from anywhere in the cluster by hostname.service-name.namespace.svc.cluster.local.
Example:
apiVersion: v1
kind: Service
metadata:
  name: bar # my subdomain is called "bar", so is this service
spec:
  selector:
    app: my-app
  ports:
  - name: foo
    port: 1234
    targetPort: 1234
Now, say I have the label app: my-app in my pods, so the service is targeting them correctly.
At this point, look what happens (from any pod, within the cluster):
/ # nslookup foo-1.bar.my-namespace.svc.cluster.local
Server: 10.63.240.10
Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.bar.my-namespace.svc.cluster.local
Address 1: 10.60.1.24 foo-1.bar.my-namespace.svc.cluster.local
2.- The second part of your question is almost correct. This is a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: my-app
  type: NodePort
This service runs on port 80, so it is reachable on port 80 from within the cluster. Kubernetes maps that port to a port in the 30000+ range on the node, so the same service is available from the outside world on, for example, port 30001 of the node. Finally, the service forwards the requests to port 8080 of the container.
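To make the three ports concrete, an illustrative sketch of how each layer could be reached (the node IP and the 30001 node port are placeholders, since no nodePort is pinned in the spec):
# from a pod inside the cluster, via the service name:
curl http://svc-nodeport:80
# from outside the cluster, via any node's IP and the allocated node port:
curl http://<node-ip>:30001
# either way, the service forwards the request to targetPort 8080 on a matching pod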

Kubernetes local port for deployment in Minikube

I'm trying to expose my Deployment on a port which I can access from my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
  - port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: bot
The deployment and the Docker container image both expose port 8000, and the Pod is labeled with app: bot.
The first results in a service whose external IP stays pending forever and never gets assigned.
The second results in ports of bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
The LoadBalancer service type is meant for cloud providers and isn't really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30356
    protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
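For example, with the bot service above (the --url flag prints the URL instead of opening a browser):
minikube service bot --url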