Kubernetes local cluster Pod hostPort - application not accessible - kubernetes

I am trying to access a web API deployed into my local Kubernetes cluster running on my laptop (Docker -> Settings -> Enable Kubernetes). Below is my Pod spec YAML.
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
kubectl get pods shows the test-api pod running. However, when I try to connect to it using http://localhost:55555/testapi/index from my laptop, I get no response. Yet I can access the application from a container in a different pod within the cluster (I did a kubectl exec -it into a different container), using the URL http://<test-api pod cluster IP>/testapi/index. Why can't I access the application using the localhost:hostPort URL?

I'd say that this approach is strongly discouraged.
According to k8s docs: https://kubernetes.io/docs/concepts/configuration/overview/#services
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
So... is the hostPort really necessary in your case, or would a NodePort Service solve it?
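For illustration, a minimal NodePort Service sketch for the test-api Pod from the question (the Service name and the nodePort value 30555 are my choices; any port in the default 30000-32767 range works):

kind: Service
apiVersion: v1
metadata:
  name: test-api
spec:
  type: NodePort
  selector:
    app: test-api
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30555
    protocol: TCP

With Docker Desktop's single-node cluster, the app should then be reachable at http://localhost:30555/testapi/index without pinning the Pod to a host port.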
If it is really necessary, then you could try using the IP returned by the command:
kubectl get nodes -o wide
http://ip-from-the-command:55555/testapi/index
Also, another test that may help your troubleshooting is checking whether your app is accessible on the Pod IP.
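For example (a sketch; the pod name and path come from the question, and it assumes curl is available in the other container):

kubectl get pod test-api -o wide          # the IP column shows the Pod IP
kubectl exec -it <other-pod> -- curl http://<pod-ip>/testapi/index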
UPDATE
I've done some tests locally and now understand better what the documentation is trying to explain. Let me walk through my test:
First I created a Pod with hostPort: 55555, using a simple nginx.
Then I listed my Pods and saw that this one was running on one specific Node.
Afterwards I tried to access the Pod on port 55555 through my master node's IP and the other nodes' IPs without success, but when accessing it through the IP of the Node where the Pod was actually running, it worked.
So the "issue" (and actually the reason this approach is not recommended) is that the Pod is accessible only through that specific Node's IP. If it restarts and starts on a different Node, the IP will change.

Related

Ignite CommunicationSpi questions in a PaaS environment

My environment is such that the Ignite client is on Kubernetes and the Ignite server is running on a normal server.
In this environment, TCP connections from the server to the client are not allowed.
For this reason, CommunicationSpi connections (server -> client) cannot be established.
What I'm curious about is: what issues can occur when CommunicationSpi is not available?
And in this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, a Service is used to communicate with pods.
The default Service type in Kubernetes is ClusterIP.
ClusterIP is an internal IP address reachable from inside the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the Kubernetes cluster, you will need a k8s Service of the NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Please note that one of the nodes in the cluster needs an external IP address assigned, plus a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node the external IP address is attached to will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
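For illustration, a minimal Service sketch of the LoadBalancer type (all names and ports here are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

Changing type to NodePort would instead expose the same service on a static port of every node.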
Alternatively, it is possible to use an Ingress.
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment I recall that it's possible to use the hostNetwork and hostPort approaches.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted, Kubernetes can reschedule it onto a different node, so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts as the number of applications running in the cluster grows.
What is host networking good for? For cases where direct access to the host networking is required.
hostPort
The hostPort setting applies to Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where hostIP is the IP address of the Kubernetes node where the container is running and hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8086
      hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application outside the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is hostPort used for? For example, the nginx-based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow inbound traffic on these ports from outside the Kubernetes cluster.
To support such a deployment configuration you would need to do quite a bit of work around network configuration - setting up K8s Services, an Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment with the servers on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers as described here.

Is it possible for outside traffic to access my deployment inside Minikube?

Kubernetes newbie here. Just want to get my fundamental understanding correct: Minikube is known for local development, but is it possible for connections from outside (not just outside the cluster) to access the pods I have deployed in minikube?
I am running my minikube on an EC2 instance, so I started minikube with the command minikube start --vm-driver=none, which means minikube runs with Docker and no VM is provisioned. My end goal is to allow outside connections to reach my pods inside the cluster and perform POST requests through the pod (for example using Postman).
If yes: I have also applied my service resource using kubectl apply -f into my minikube, using NodePort in the YAML file. I also wish to understand port, nodePort, and targetPort correctly: port is the port number assigned to that particular service; nodePort is the port number on the node (in my case the node is my EC2 instance, with its private IP); targetPort is the port number equivalent to the containerPort I've assigned in the YAML of my deployment. Correct me if I am wrong in these statements.
Thanks.
Yes, you can do that, as you have started minikube with:
minikube start --vm-driver=none
nodePort is the port that a client outside of the cluster will "see". The nodePort is opened on every node in your cluster via kube-proxy. You can use the nodePort to access the application from the outside world, e.g. http://<NodeIP>:<NodePort>.
port is the port your service listens on inside the cluster. Let's take this example:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: NodePort   # required for the nodePort field below to be valid
  ports:
  - port: 8080
    targetPort: 8070
    nodePort: 31222
    protocol: TCP
  selector:
    component: test-service-app
From inside the k8s cluster this service will be reachable via http://test-service.default.svc.cluster.local:8080 (service-to-service communication inside your cluster), and any request reaching it is forwarded to a running pod on targetPort 8070.
targetPort also defaults to the same value as port if not specified otherwise.
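From outside the cluster you would instead hit the nodePort on any node (a sketch, assuming a reachable node IP and no firewall in the way):

curl http://<node-ip>:31222/

kube-proxy on the node forwards that request to the service, which in turn forwards it to a pod on targetPort 8070.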

Kubernetes Service not being assigned an (external) IP address

There are various answers for very similar questions around SO that all show what I expect my deployment to look like, however mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node, a replication controller, and a single pod template has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251.
The dashboard is available at 10.49.106.251:30000.
I am deploying a service with a YAML file, but the service is never assigned an external IP - the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way, no connection to the service is possible. I should also point out that the 10 replicated pods all match the selector.
The results of running kubectl get svc for the default case, and after updating the external IP, are below:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-service   NodePort   10.108.61.233   <none>          8080:32406/TCP   1m
hello-service   NodePort   10.108.61.233   10.49.106.251   8080:32406/TCP   1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service   NodePort   10.108.61.233   <nodes>         8080:32406/TCP   1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also met the problem of exposing a "public IP" for my local development cluster.
Fortunately, I found a kubectl command which can help:
kubectl port-forward service/service-name 9092
where 9092 is the service port to expose, so that I can access applications inside the cluster from my local development environment.
The important note is that it is not a "production" grade solution.
It works well as a temporary hack to get at the cluster's insides.
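For example, with the hello-service from the question (a sketch; the local:remote port mapping is my choice):

kubectl port-forward service/hello-service 8080:8080
curl http://localhost:8080/

While the port-forward is running, requests to localhost:8080 are tunnelled through the API server to a pod backing the service.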
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP) it will be http://[the node IP]:32406/. This will hit your minikube node and the request will be routed to your pod in round-robin fashion.
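With minikube specifically, you can also ask for the URL directly (assuming the service is named hello-service):

minikube service hello-service --url

which prints something like http://<node-ip>:32406.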
I had the same problem when trying to deploy a simple hello-world image locally with Kubernetes v1.9.2.
After two weeks of attempts, it seems that nginx web server applications listen internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80

How do I externally access a service with kubernetes NodePort?

I've set up a NodePort service using the following config:
wordpress-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: wordpress
Is this sufficient to access the service externally? If so, how can I now access the service? What details do I need - and how do I determine them - for example the node IP?
For Kubernetes on GCE:
We had the same question regarding services of type NodePort: How do we access node port services from our own host?
@ivan.sim's answer (nodeIp:nodePort) is on the mark; however, you still wouldn't be able to access your service unless you add a firewall ingress (inbound to Google Cloud) traffic rule on the VPC network console to allow your host to reach your compute node.
The above rule is dangerous and should be used only during development.
You can find the node port using either the Google Cloud console or by running subsequent kubectl commands to find out which node runs the pod that has your container, i.e. kubectl get pods, kubectl describe pod your-pod-name, kubectl describe node node-that-runs-your-pod (.status.addresses has your ExternalIP).
It would be great if we could extract the IP of the node running our container in the pod using only a label/selector and a few lines of commands, so here is what we did (in this case our selector is app: your-label):
$ nodename=$(kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.app=="your-label")].spec.nodeName}')
$ nodeIp=$(kubectl get nodes -o jsonpath='{.items[?(@.metadata.name=="'$(echo $nodename)'")].status.addresses[?(@.type=="ExternalIP")].address}')
$ echo $nodeIp
notice: we used JSONPath to extract the information we desired; for more on JSONPath see: json path
You could certainly turn this into a script that takes a label/selector as input and outputs the external IP of the node running your container!
To get the nodeport just type:
$ kubectl get services
Under the PORT(S) column you will see something like port:nodePort. This nodePort is what you want.
nodeIp:nodePort
When you define a service as type NodePort, every node in your cluster will proxy that port to your service. If your nodes are reachable from outside the Kubernetes cluster, you should be able to access the service at nodeIP:nodePort.
To determine nodeIP of a particular node, you can use either kubectl get no <node> -o yaml or kubectl describe no <node>. The status.Addresses field will be of interest. Generally, you will see fields like HostName, ExternalIP and InternalIP there.
To determine nodePort of your service, you can use either kubectl get svc wordpress -o yaml or kubectl describe svc wordpress. The spec.ports.nodePort is the port you need.
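If you prefer one-liners, jsonpath can pull both values out directly (a sketch; the service name comes from the question, and the node is assumed to have an ExternalIP address):

kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}'
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}'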
A Service defined like this gets assigned a high port number and is exposed on all your cluster nodes on that port (probably something like 3xxxx). It is hard to tell the rest without proper knowledge of how your cluster is provisioned; kubectl get nodes should give you some knowledge about your nodes.
I assume, though, that you want to expose the service to the outside world. In the long run I suggest getting familiar with LoadBalancer type Services and Ingress / IngressControllers.

How to assign a static IP to a pod using Kubernetes

Currently Kubernetes assigns the IP address to a pod, and that address is shared within the pod by all its containers.
I am trying to assign a static IP address to a pod, i.e. one in the same network range as the addresses Kubernetes assigns, using the following deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: rediscont
    spec:
      containers:
      - name: redisbase
        image: localhost:5000/demo/redis
        ports:
        - containerPort: 6379
          hostIP: 172.17.0.1
          hostPort: 6379
On the Docker host where it's deployed I see the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4106d81a2310 localhost:5000/demo/redis "/bin/bash -c '/root/" 28 seconds ago Up 27 seconds k8s_redisbase.801f07f1_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_1b598062
71b03cf0bb7a gcr.io/google_containers/pause:2.0 "/pause" 28 seconds ago Up 28 seconds 172.17.0.1:6379->6379/tcp k8s_POD.99e70374_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_8c381981
iptables-save gives the following output:
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379
Even with this, the IP 172.17.0.1 is not accessible from other pods.
Basically the question is how to assign a static IP to a pod so that 172.17.0.3 doesn't get assigned to it.
Generally, assigning a Pod a static IP address is an anti-pattern in Kubernetes environments. There are a couple of approaches you may want to explore instead. Using a Service to front-end your Pods (or to front-end even just a single Pod) will give you a stable network identity, and allow you to horizontally scale your workload (if the workload supports it). Alternately, using a StatefulSet may be more appropriate for some workloads, as it will preserve startup order, host name, PersistentVolumes, etc., across Pod restarts.
I know this doesn't necessarily directly answer your question, but hopefully it provides some additional options or information that proves useful.
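For example, a plain ClusterIP Service fronting the redis Deployment from the question would give it a stable DNS name (redis.default.svc.cluster.local) regardless of which IP the Pod gets (a minimal sketch reusing the run: rediscont label):

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    run: rediscont
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP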
Assigning static IP addresses to Pods is not possible in OSS Kubernetes. But it is possible to configure this via some CNI plugins. For instance, Calico provides a way to override IPAM and use fixed addresses by annotating the pod. The address must be within a configured Calico IP pool and not currently in use.
https://docs.projectcalico.org/networking/use-specific-ip
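For example, the linked page describes pinning a Pod to a specific address with an annotation (a sketch; the pod name and image are placeholders, and the address must come from a configured Calico IP pool):

apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod
  annotations:
    cni.projectcalico.org/ipAddrs: "[\"192.168.0.1\"]"
spec:
  containers:
  - name: app
    image: nginx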
When you created a Deployment with one replica and defined hostIP and hostPort, you basically bound the hostIP and hostPort of your host machine to your pod IP and container port, so that traffic is routed from hostIP:port to podIP:port.
The created pod (and the container inside it) was assigned an IP address from the IP range available to it. Basically, the IP range depends on the CNI networking plugin used and how it allocates an IP range to each node. For instance flannel, by default, provides a /24 subnet to each host, from which the Docker daemon allocates IPs to containers. So the hostIP: 172.17.0.1 option in the spec has nothing to do with assigning an IP address to the pod.
Basically, the question is how to assign static IP to a pod so that 172.17.0.3 doesn't get assigned to it
As far as I know, all major networking plugins provide a range of IPs to hosts, so that a pod's IP will be assigned from that range.
You can explore different networking plugins and look at how each of them deals with IPAM (IP Address Management); maybe some plugin provides that functionality or offers tweaks to implement it, but overall its usefulness would be quite limited.
Below is useful info on hostIP / hostPort from the official K8s docs:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
Avoid using hostNetwork, for the same reasons as hostPort.
Original info link to config best practices.