Currently Kubernetes assigns an IP address to a pod, and that address is shared by all the containers within the pod.
I am trying to assign a static IP address to a pod, i.e. one in the same network range as the addresses Kubernetes normally assigns. I am using the following deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: rediscont
    spec:
      containers:
      - name: redisbase
        image: localhost:5000/demo/redis
        ports:
        - containerPort: 6379
          hostIP: 172.17.0.1
          hostPort: 6379
On the Docker host where it's deployed, I see the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4106d81a2310 localhost:5000/demo/redis "/bin/bash -c '/root/" 28 seconds ago Up 27 seconds k8s_redisbase.801f07f1_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_1b598062
71b03cf0bb7a gcr.io/google_containers/pause:2.0 "/pause" 28 seconds ago Up 28 seconds 172.17.0.1:6379->6379/tcp k8s_POD.99e70374_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_8c381981
iptables-save gives the following output:
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379
Even with this, the IP 172.17.0.1 is not accessible from other pods.
Basically, the question is how to assign a static IP to a pod so that 172.17.0.3 doesn't get assigned to it.
Generally, assigning a Pod a static IP address is an anti-pattern in Kubernetes environments. There are a couple of approaches you may want to explore instead. Using a Service to front-end your Pods (or to front-end even just a single Pod) will give you a stable network identity, and allow you to horizontally scale your workload (if the workload supports it). Alternately, using a StatefulSet may be more appropriate for some workloads, as it will preserve startup order, host name, PersistentVolumes, etc., across Pod restarts.
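For illustration, a minimal Service fronting the redis Deployment from the question might look like the sketch below (the run: rediscont selector and port 6379 are taken from the spec above; the Service name is just an example):
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    run: rediscont   # matches the pod template labels of the Deployment
  ports:
  - port: 6379       # port the Service exposes inside the cluster
    targetPort: 6379 # containerPort of the redis container
Other pods can then reach redis at the stable DNS name redis (or redis.default.svc.cluster.local) instead of a pod IP.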
I know this doesn't necessarily directly answer your question, but hopefully it provides some additional options or information that proves useful.
Assigning static IP addresses to Pods is not possible in OSS Kubernetes, but it can be done with some CNI plugins. For instance, Calico provides a way to override IPAM and use fixed addresses by annotating the pod. The address must be within a configured Calico IP pool and not currently in use.
https://docs.projectcalico.org/networking/use-specific-ip
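As a sketch of what that looks like, assuming Calico is the CNI plugin in use and 192.168.0.100 is an unused address inside one of your configured Calico IP pools (both assumptions, for illustration only):
apiVersion: v1
kind: Pod
metadata:
  name: redis-static
  annotations:
    # Calico-specific IPAM override; the address must belong to a Calico IP pool
    cni.projectcalico.org/ipAddrs: "[\"192.168.0.100\"]"
spec:
  containers:
  - name: redisbase
    image: localhost:5000/demo/redis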
When you created the Deployment with one replica and defined hostIP and hostPort, you basically bound the hostIP and hostPort of your host machine to your pod IP and container port, so that traffic is routed from hostIP:port to podIP:port.
The created pod (and the container inside it) was assigned an IP address from the range available to it. That range depends on the CNI networking plugin used and how it allocates an IP range to each node. For instance, flannel by default provides a /24 subnet to each host, from which the Docker daemon allocates IPs to containers. So the hostIP: 172.17.0.1 option in the spec has nothing to do with assigning an IP address to the pod.
Basically, the question is how to assign a static IP to a pod so that 172.17.0.3 doesn't get assigned to it
As far as I know, all major networking plugins provide a range of IPs to hosts, so a pod's IP will be assigned from that range.
You can explore different networking plugins and look at how each of them handles IPAM (IP Address Management); maybe some plugin provides that functionality or offers some tweak to implement it, but overall its usefulness would be quite limited.
Below is useful info on hostIP and hostPort from the official Kubernetes docs:
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
Avoid using hostNetwork, for the same reasons as hostPort.
Original info link: config best practices.
Related
I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kinda difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel as a pre-installed CNI, as well as Traefik as an Ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are actually written, this is just from the top of my head for your reference; the entries are working fine):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, although I have not changed anything.
So now my question: how would one actually configure the port forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port forwards for HA? The one master node it's pointing to could, theoretically, just stop working, and then all services would not be reachable anymore from outside.
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I'll do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes on port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for it (make sure not to assign that to any computers).
This is a good video that explains how to do it with a nodejs server, but the concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node (see the sketch after the link):
https://metallb.universe.tf/configuration/k3s/
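The flag in question disables k3s's bundled service load balancer so it doesn't conflict with MetalLB. A sketch of what that could look like with the standard k3s install script (the exact command is an assumption on my part, adjust it to however you install k3s):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb" sh -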
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Install MetalLB with these YAML files first (the metallb-system namespace must exist before the ConfigMap can be applied into it):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Then apply the ConfigMap:
kubectl apply -f metallb-cm.yaml
Ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the external IP.
kubectl get service -A - look for your ingress service and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!
In the past I've tried a NodePort Service, and if I add a firewall rule to the corresponding Node, it works like a charm:
type: NodePort
ports:
- nodePort: 30000
  port: 80
  targetPort: 5000
I can access my service from outside as long as the node has an external IP (which it does by default in GKE).
However, the nodePort can only be assigned a port in the 30000+ range, which is not very convenient.
By the way, the Service looks as follows:
kubectl get service -o=wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-engine-service NodePort 10.43.244.110 <none> 80:30000/TCP 11m app=web-engine-pod
Recently, I've come across a different configuration option that is documented here.
I've tried it, as it seems quite promising and should allow me to expose my service on any port I want.
The configuration is as follows:
ports:
- name: web-port
  port: 80
  targetPort: 5000
externalIPs:
- 35.198.163.215
After the service is updated, I can see that the external IP is indeed assigned to it:
$ kubectl get service -o=wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-engine-service ClusterIP 10.43.244.110 35.198.163.215 80/TCP 19m app=web-engine-pod
(where 35.198.163.215 is the Node's external IP in GKE)
And yet, my app is not available on the Node's IP, unlike in the first scenario (I did add firewall rules for all the ports I'm working with, including 80, 5000 and 30000).
What's the point of the externalIPs configuration then? What does it actually do?
Note: I'm creating a demo project, so please don't tell me about the LoadBalancer type; I'm well aware of that and will get to it a bit later.
In the API documentation, externalIPs is documented as (emphasis mine):
externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.
So you can put any IP address you want there, and it will show up in kubectl get service output, but it doesn't mean the cluster will actually accept traffic there.
To accept inbound traffic from outside the cluster, you need a minimum of a NodePort service; in a cloud environment a LoadBalancer service or an Ingress is a more common setup. You can't really short-cut around these. Conversely, a LoadBalancer isn't especially advanced or difficult, just change type: LoadBalancer in the configuration you already show and GKE will create the endpoint for you. The GKE documentation has a more complete example.
("Inside the cluster" and "outside the cluster" are different networks, and like other NAT setups pods can generally make outbound calls but you need specific setup to accept inbound calls. That's what a NodePort service does, and in the standard setup a LoadBalancer service builds on top of that.)
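As a sketch only, reusing the names from your kubectl output, that change would look roughly like this (GKE then allocates the external IP for you):
apiVersion: v1
kind: Service
metadata:
  name: web-engine-service
spec:
  type: LoadBalancer    # GKE provisions an external load balancer and IP
  selector:
    app: web-engine-pod
  ports:
  - name: web-port
    port: 80            # externally exposed port
    targetPort: 5000    # port the container listens on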
I wanted to give you more insight on:
How you can manage to make it work.
Why it's not working in your example.
More information about exposing traffic on GKE.
How you can manage to make it work
You will need to enter the internal IP of your node/nodes in the service definition, under externalIPs.
Example:
apiVersion: v1
kind: Service
metadata:
  name: hello-external
spec:
  selector:
    app: hello
    version: 2.0.0
  ports:
  - name: http
    protocol: TCP
    port: 80 # port to send the traffic to
    targetPort: 50001 # port that pod responds to
  externalIPs:
  - 10.156.0.47
  - 10.156.0.48
  - 10.156.0.49
Why it's not working in your example?
I've prepared an example to show you why it doesn't work.
Assuming that you have a VM in GCP with:
any operating system that allows running tcpdump
an internal IP of 10.156.0.51
an external IP of 35.246.207.189
traffic allowed to enter this VM on port 1111
You can run the command below (on the VM) to capture the traffic coming to port 1111:
$ tcpdump port 1111 -nnvvS
-nnvvS - don't resolve DNS or port names, be more verbose when printing info, print the absolute sequence numbers
Then you need to send a request to the external IP 35.246.207.189 of your VM on port 1111:
$ curl 35.246.207.189:1111
You will get a connection refused message but the packet will be captured. You will get an output similar to this:
tcpdump: listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
12:04:25.704299 IP OMMITED
YOUR_IP > 10.156.0.51.1111: Flags [S], cksum 0xd9a8 (correct), seq 585328262, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1282380791 ecr 0,sackOK,eol], length 0
12:04:25.704337 IP OMMITED
10.156.0.51.1111 > YOUR_IP: Flags [R.], cksum 0x32e3 (correct), seq 0, ack 585328263, win 0, length 0
From that example you can see the destination IP address of a packet coming to the VM. As shown above, it's the internal IP of your VM, not the external one. That's why putting the external IP in your YAML definition is not working.
This example also works on GKE. For simplicity you can create a GKE cluster with Ubuntu as the base image and do the same as shown above.
You can read more about IP addresses by following link below:
Cloud.google.com: VPC: Docs: IP addresses
More about exposing traffic on GKE
What's the point of externalIPs configuration then? What does it actually do?
In simple terms, it allows traffic to enter your cluster. A request sent to your cluster needs to have a destination IP matching one of the externalIPs in your service definition in order to be routed to the corresponding service.
This method requires you to track the IP addresses of your nodes and can be prone to issues when the IP address of a node becomes unavailable (node autoscaling, for example).
I recommend exposing your services/applications by following the official GKE documentation:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps
As mentioned before, the LoadBalancer type of service will automatically take into consideration changes made to the cluster, such as autoscaling that increases or decreases the node count. With the service shown above (with externalIPs), this would require manual changes.
Please let me know if you have any questions to that.
I am trying to access a web API deployed to my local Kubernetes cluster running on my laptop (Docker -> Settings -> Enable Kubernetes). Below is my Pod spec YAML.
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
  - name: testapicontainer
    image: myprivaterepo/testapi:latest
    ports:
    - name: web
      hostPort: 55555
      containerPort: 80
      protocol: TCP
kubectl get pods shows test-api running. However, when I try to connect to it using http://localhost:55555/testapi/index from my laptop, I do not get a response. But I can access the application from a container in a different pod within the cluster (I did a kubectl exec -it into a different container), using the URL http://<test-api pod cluster IP>/testapi/index. Why can't I access the application using the localhost:hostport URL?
I'd say that this is strongly not recommended.
According to k8s docs: https://kubernetes.io/docs/concepts/configuration/overview/#services
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
So... is the hostPort really necessary in your case? Or would a NodePort Service solve it?
If it is really necessary, then you could try using the IP returned by the command:
kubectl get nodes -o wide
http://ip-from-the-command:55555/testapi/index
Also, another test that may help your troubleshooting is checking whether your app is accessible on the Pod IP.
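If a NodePort Service would fit, a minimal sketch might look like the one below (the selector reuses the app: test-api label from your Pod spec; the nodePort value is just an example in the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: test-api
spec:
  type: NodePort
  selector:
    app: test-api
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 80    # containerPort of testapicontainer
    nodePort: 30080   # reachable on every node's IP at this port
You could then try http://<node-ip>:30080/testapi/index from your laptop.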
UPDATE
I've done some tests locally and now understand better what the documentation is trying to explain. Let me go through my test:
First I created a Pod with hostPort: 55555; I did that with a simple nginx.
Then I listed my Pods and saw that this one was running on one specific Node.
Afterwards I tried to access the Pod on port 55555 through my master node's IP and the other nodes' IPs without success, but when accessing it through the IP of the Node where the Pod was actually running, it worked.
So the "issue" (and actually the reason this approach is not recommended) is that the Pod is accessible only through that specific Node's IP. If it restarts and starts on a different Node, the IP will also change.
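For reference, a minimal sketch of that kind of test Pod (the image and names are illustrative, not the exact manifest I used):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 55555   # only reachable on the IP of the Node where this Pod lands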
My environment is that the Ignite client is on Kubernetes and the Ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi (server -> client) connections cannot be established.
What I'm curious about is: what issues can occur in situations where CommunicationSpi is not available?
And in this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the Kubernetes cluster, you will need a k8s Service of NodePort or LoadBalancer type (a NodePort sketch follows the descriptions below).
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that you need to have an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
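As an illustration only, a NodePort Service exposing an Ignite client pod's communication port could look roughly like this (the app: ignite-client label, the nodePort value, and the use of Ignite's default communication port 47100 are all assumptions; the server would still need to be told to reach the node address, e.g. via an AddressResolver):
apiVersion: v1
kind: Service
metadata:
  name: ignite-client-comm
spec:
  type: NodePort
  selector:
    app: ignite-client
  ports:
  - port: 47100       # assumed Ignite communication port
    targetPort: 47100
    nodePort: 31100   # example port in the default NodePort range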
Alternatively, it is possible to use an Ingress.
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment, I recall that it's possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted Kubernetes can reschedule it onto a different node, so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is host networking good for? For cases where direct access to the host networking is required.
hostPort
The hostPort setting applies to Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where hostIP is the IP address of the Kubernetes node where the container is running and hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8086
      hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is hostPort used for? For example, the nginx-based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow inbound traffic on these ports from outside of the Kubernetes cluster.
To support such a deployment configuration you would need to dance around the network configuration quite a bit - setting up K8s Services, an Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers in the configuration, as described here.
I have a Kubernetes cluster running Calico as the overlay and NetworkPolicy implementation configured for IP-in-IP encapsulation and I am trying to expose a simple nginx application using the following Service:
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
selector:
app: nginx
I am trying to write a NetworkPolicy that only allows connections via the load balancer. On a cluster without an overlay, this can be achieved by allowing connections from the CIDR used to allocate IPs to the worker instances themselves - this allows a connection to hit the Service's NodePort on a particular worker and be forwarded to one of the containers behind the Service via IPTables rules. However, when using Calico configured for IP-in-IP, connections made via the NodePort use Calico's IP-in-IP tunnel IP address as the source address for cross node communication, as shown by the ipv4IPIPTunnelAddr field on the Calico Node object here (I deduced this by observing the source IP of connections to the nginx application made via the load balancer). Therefore, my NetworkPolicy needs to allow such connections.
My question is how can I allow these types of connections without knowing the ipv4IPIPTunnelAddr values beforehand and without allowing connections from all Pods in the cluster (since the ipv4IPIPTunnelAddr values are drawn from the cluster's Pod CIDR range). If worker instances come up and die, the list of such IPs will surely change, and I don't want my NetworkPolicy rules to depend on them.
Calico version: 3.1.1
Kubernetes version: 1.9.7
Etcd version: 3.2.17
Cloud provider: AWS
I’m afraid we don’t have a simple way to match the tunnel IPs dynamically right now. If possible, the best solution would be to move away from IPIP; once you remove that overlay, everything gets a lot simpler.
In case you're wondering, we need to force the nodes to use the tunnel IP because, if you're using IPIP, we assume that your network doesn't allow direct pod-to-node return traffic (since the network won't be expecting the pod IP, it may drop the packets).