Kubernetes Endpoints IPs not in range - kubernetes

I have a K8s cluster installed in several RHEL 7.2 VMs.
Seems that the installation from the yum repository comes without addons.
Currently I am facing the following problem with almost any service I am trying to deploy: Jenkins, Kube-ui, influxdb-grafana.
The Endpoint IPs are not in the range that is defined for Flannel and obviously the services are not available.
Any ideas on how to debug/resolve the problem?
System details:
# lsb_release -i -r
Distributor ID: RedHatEnterpriseServer
Release: 7.2
Packages installed:
kubernetes.x86_64 1.2.0-0.9.alpha1.gitb57e8bd.el7
etcd.x86_64 2.2.5-1.el7
flannel.x86_64 0.5.3-9.el7
docker.x86_64 1.9.1-25.el7.centos
ETCD network configuration
# etcdctl get /atomic.io/network/config
{"Network":"10.0.0.0/16"}
Service gets proper IP but wrong Endpoints
# kubectl describe svc jenkinsmaster
Name: jenkinsmaster
Namespace: default
Labels: kubernetes.io/cluster-service=true,kubernetes.io/name=JenkinsMaster
Selector: name=jenkinsmaster
Type: NodePort
IP: 10.254.113.89
Port: http 8080/TCP
NodePort: http 30996/TCP
Endpoints: 172.17.0.2:8080
Port: slave 50000/TCP
NodePort: slave 31412/TCP
Endpoints: 172.17.0.2:50000
Session Affinity: None
No events.
Thank you.

I think the flannel network subnet and the kubernetes internal network subnet seem to be conflicting here.
With the amount of information I see now, all I can say is that there is a conflict here. To verify that flannel is working, just start a container on two different machines connected with flannel and see if they can talk and what IP addresses they get. If they are being assigned IPs in the range 10.0.0.0/16 and they can talk, then flannel is doing good, and something is wrong with the integration with kubernetes.
If you are not getting IP addresses of some other range, flannel is not doing good.

kubernetes 1.12... docker 1.9... They are ancient versions now. So you don't have CNI or kubeadm. I can barely remember how to set up a kubernetes cluster with flannel from that time.
Anyway, you need to know that the Endpoint IP is the same as the target Pod IP, that is, the IP of the docker container. So your docker container IP is not in the same range as your flannel IP, and 172.17.0.x is the default docker IP range. So I think you need to change the docker start parameters, like --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}; you can use 10.0.0.0/16 as FLANNEL_SUBNET if you want a basic setup.

Related

Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service's actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don't even get a cluster IP.
This relies on using a resolvable hostname of your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for K3S.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>

DNS server in kubernetes for translate LAN hosts

I am using a baremetal cluster of 1 master and 2 nodes on premise in my home lab with istio, metallb and calico.
I want to create a DNS server in kubernetes that translates IPs for the hosts on the LAN.
Is it possible to use the coreDNS already installed in k8s?
Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:
Stackoverflow.com: Questions: How to expose Kubernetes DNS externally
For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like nslookup kubernetes.default.svc.cluster.local).
I've included an example of how you can expose your CoreDNS to external sources and add a Service that would point to some IP address.
Steps:
Modify the CoreDNS Service to be available outside.
Modify the configMap of your CoreDNS accordingly to:
CoreDNS.io: Plugins: K8s_external
Create a Service that is pointing to external device.
Test
Modify the CoreDNS Service to be available outside.
As you are new to Kubernetes, you are probably aware of how Services work and which of them can be made available outside. You will need to change your CoreDNS Service from ClusterIP to either NodePort or LoadBalancer (I'd reckon LoadBalancer would be a better idea considering that metallb is used and you will access the DNS server on port 53).
$ kubectl edit --namespace=kube-system service/coredns (or kube-dns)
A side note!
CoreDNS uses TCP and UDP simultaneously, which could be an issue when creating a LoadBalancer. Here you can find more information on it:
Metallb.universe.tf: Usage (at the bottom)
Modify the configMap of your CoreDNS
If you would like to resolve a domain like, for example, example.org, you will need to edit the configMap of CoreDNS in the following way:
$ kubectl edit configmap --namespace=kube-system coredns
Add the line to the Corefile:
k8s_external example.org
This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.
The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.
-- CoreDNS.io: Plugins: K8s_external
Create a Service that is pointing to external device.
Following the link that I've included, you can now create a Service that will point to an IP address:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: None
  externalIPs:
  - 192.168.200.123
  type: ClusterIP
Test
I've used minikube with --driver=docker (with NodePort) but I'd reckon you can use the ExternalIP of your LoadBalancer to check it:
dig @192.168.49.2 test.default.example.org -p 32261 +short
192.168.200.123
where:
@192.168.49.2 - IP address of minikube
test.default.example.org - service-name.namespace.k8s_external_domain
-p 32261 - NodePort port
+short - to limit the output
Additional resources:
Linux.die.net: Man: Dig

ignite CommunicationSpi questions in PAAS environment

My environment is that the ignite client is on kubernetes and the ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi(server -> client) cannot be allowed.
What I'm curious about is what issues can occur in situations where CommunicationSpi is not available?
In this environment, Is there a way to make a CommunicationSpi(server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the kubernetes cluster, you will need k8s service of NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that it is needed to have an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Alternatively it is possible to use Ingress
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
upon your comment I recall that it's possible to use hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted Kubernetes can reschedule the pod onto a different node, and so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is the host networking good for? For cases where a direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where the hostIP is the IP address of the Kubernetes node where the container is running and the hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8086
      hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration you would need to dance a lot around a network configuration - setting up K8 Services, Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8 environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers as described here.
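As an added example (my code, not from the answer above), here is a Python audit that reads pod specs in the shape of kubectl get pods -A -o json and reports every port a pod binds directly on its node through hostNetwork or hostPort, together with the collisions the answer warns about: two pods wanting the same host port cannot be scheduled onto the same node. It assumes that a hostNetwork pod's declared containerPorts are the ports its process listens on, which a manifest cannot guarantee, and it groups still-unscheduled pods together because they cannot be co-located. The pods below are constructed samples and the function names are mine.

```python
# Added example, not part of the original answer: inventory hostNetwork and
# hostPort bindings across pods and find per-node port collisions.
import json

# Constructed sample in the shape of `kubectl get pods -A -o json`, trimmed.
SAMPLE_PODS_JSON = json.dumps({"items": [
    {"metadata": {"name": "nginx-hostnet", "namespace": "default"},
     "spec": {"hostNetwork": True, "nodeName": "kubenode01",
              "containers": [{"name": "nginx", "image": "nginx",
                              "ports": [{"containerPort": 80}]}]}},
    {"metadata": {"name": "nginx-hostport", "namespace": "default"},
     "spec": {"nodeName": "kubenode01",
              "containers": [{"name": "nginx", "image": "nginx",
                              "ports": [{"containerPort": 8086, "hostPort": 443}]}]}},
    {"metadata": {"name": "ingress-ctl", "namespace": "ingress"},
     "spec": {"nodeName": "kubenode01",
              "containers": [{"name": "ctl", "image": "ingress-nginx",
                              "ports": [{"containerPort": 80, "hostPort": 80},
                                        {"containerPort": 443, "hostPort": 443}]}]}},
    {"metadata": {"name": "app", "namespace": "default"},
     "spec": {"nodeName": "kubenode02",
              "containers": [{"name": "app", "image": "app",
                              "ports": [{"containerPort": 8080}]}]}},
]})


def host_exposures(pods_json):
    """List (pod, node, port, protocol, how) for every port a pod binds on its node.

    `how` is "hostNetwork" when the pod shares the host's network namespace
    (assumption: each declared containerPort is then bound on the host) or
    "hostPort" for an explicit container-to-host port mapping.
    """
    found = []
    for item in json.loads(pods_json)["items"]:
        meta, spec = item["metadata"], item["spec"]
        pod = f"{meta['namespace']}/{meta['name']}"
        node = spec.get("nodeName", "<unscheduled>")
        host_net = spec.get("hostNetwork", False)
        for container in spec.get("containers", []):
            for p in container.get("ports", []):
                proto = p.get("protocol", "TCP")
                if host_net:
                    found.append((pod, node, p["containerPort"], proto, "hostNetwork"))
                elif "hostPort" in p:
                    found.append((pod, node, p["hostPort"], proto, "hostPort"))
    return found


def host_port_conflicts(exposures):
    """Map (node, port, protocol) -> pods for bindings that collide on one node."""
    by_key = {}
    for pod, node, port, proto, _how in exposures:
        by_key.setdefault((node, port, proto), []).append(pod)
    return {key: pods for key, pods in by_key.items() if len(pods) > 1}


def report(pods_json):
    """Human-readable inventory followed by any conflicts."""
    exposures = host_exposures(pods_json)
    lines = [f"{pod} binds {proto}/{port} on {node} via {how}"
             for pod, node, port, proto, how in exposures]
    for (node, port, proto), pods in sorted(host_port_conflicts(exposures).items()):
        lines.append(f"CONFLICT on {node} {proto}/{port}: {', '.join(pods)}")
    return lines


if __name__ == "__main__":
    for line in report(SAMPLE_PODS_JSON):
        print(line)
```

Run against real cluster output, the inventory doubles as a list of every workload that bypasses the Service layer and sits directly on node interfaces, which is the set worth reviewing when deciding whether hostNetwork is really needed for a case like the Ignite one.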

Is it possible for outside traffic to access my deployment inside Minikube?

Kubernetes newbie here. Just want to get my fundamental understanding correct. Minikube is known for local development and is it possible for connection outside (not just outside cluster) to access the pods I have deployed in minikube?
I am running my minikube in ec2 instance so I started my minikube with command minikube start --vm-driver=none, which means running minikube with Docker, no VM provisioned. My end goal is to allow connection outside to reach my pods inside the cluster and perform POST request through the pod (for example using Postman).
If yes, I also have my service resource applied using kubectl apply -f into my minikube using NodePort in the yaml file. Also, I wish to understand port, nodePort, and targetPort correctly. port is the port number assigned to that particular service, nodePort is the port number on the node (in my case my ec2 instance private IP), and targetPort is the port number equivalent to the containerPort I've assigned in the yaml of my deployment. Correct me if I am wrong in this statement.
Thanks.
Yes, you can do that,
as you have started minikube with:
minikube start --vm-driver=none
nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. You can use nodePort to access the application from outside world. Like https://loadbalancerIP:NodePort
port is the port your service listens on inside the cluster. Let's take this example:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  ports:
  - port: 8080
    targetPort: 8070
    nodePort: 31222
    protocol: TCP
  selector:
    component: test-service-app
From inside k8s cluster this service will be reachable via http://test-service.default.svc.cluster.local:8080 (service to service communication inside your cluster) and any request reaching there is forwarded to a running pod on targetPort 8070.
targetPort also defaults to the same value as port if not specified otherwise.

How to expose a Kubernetes service externally using NodePort

I run the CoreOS k8s cluster on Mac OSX, which means it's running inside VirtualBox + Vagrant
I have in my service.yaml file:
spec:
  type: NodePort
When I type:
kubectl get services
I see:
NAME          CLUSTER_IP      EXTERNAL_IP   PORT(S)    SELECTOR
kubernetes    10.100.0.1      <none>        443/TCP    <none>
my-frontend   10.100.250.90   nodes         8000/TCP   name=my-app
What is the "nodes" external IP? How do I access my-frontend externally?
In addition to "NodePort" types of services there are some additional ways to be able to interact with kubernetes services from outside of cluster:
Use service type "LoadBalancer". It works only for some cloud providers and will not work for virtualbox, but I think it will be good to know about that feature. Link to the documentation
Use one of the latest features called "ingress". Here is description from manual "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation
If kubernetes is not a strict requirement and you can switch to the latest openshift origin (which is "kubernetes on steroids"), you can use the origin feature called "router".
Information about openshift origin.
Information about openshift origin routes
I assume you are using MiniKube for Kubernetes. In such case, to identify your node ip address, use the following command:
.\minikube.exe ip
If the exposed service is of type=Nodeport, to check the exposed port use the following command:
.\kubectl.exe describe service <service-name>
Check for Node port in the result. Also, if you want to have all these details via nice UI, then you can launch the Kubernetes Dashboard present at the following address:
<Node-ip>:30000
The easiest way to get the host ports is kubectl describe services my-frontend.
The node port will be displayed.
Also you can check the api:
api/v1/namespaces/{namespace_name}/services/{service_name}
or list all:
api/v1/namespaces/default/services
Last, you can choose a fixed nodePort in the service.yml.
Here is the doc on node addresses: http://kubernetes.io/docs/admin/node/#addresses
You can specify the port number of nodePort when you specify the service. If you didn't manually specify a port, system will allocate one for you. You can kubectl get services -o yaml and find the port at spec.ports[*].nodePort, as suggested in the doc here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And you can access your front-end at {nodes' external addresses}:{nodePort}
Hope this helps.
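As an added example serving both the minikube question about port, nodePort and targetPort and this question about finding the "nodes" address (my code, not from either thread), here is a Python script that reads the shape of kubectl get services -o json and, for each Service port, resolves the three values the way the answers describe: targetPort defaults to port when absent, the in-cluster name is <service>.<namespace>.svc.cluster.local:<port>, and a NodePort is reachable at a node address plus spec.ports[*].nodePort. It also flags a nodePort outside the API server's default 30000-32767 range, which would mean the cluster runs a non-default --service-node-port-range. The services and node addresses below are constructed samples; the function names are mine.

```python
# Added example, not part of either original answer: resolve effective
# port / targetPort / nodePort values and the URLs a client would use.
import json

# Default kube-apiserver --service-node-port-range.
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

# Constructed sample in the shape of `kubectl get services -o json`, trimmed.
SAMPLE_SERVICES_JSON = json.dumps({"items": [
    {"metadata": {"name": "test-service", "namespace": "default"},
     "spec": {"type": "NodePort", "clusterIP": "10.100.250.90",
              "ports": [{"port": 8080, "targetPort": 8070, "nodePort": 31222, "protocol": "TCP"}],
              "selector": {"component": "test-service-app"}}},
    {"metadata": {"name": "my-frontend", "namespace": "default"},
     "spec": {"type": "NodePort", "clusterIP": "10.100.250.91",
              "ports": [{"port": 8000, "nodePort": 30080, "protocol": "TCP"}]}},  # no targetPort
    {"metadata": {"name": "kubernetes", "namespace": "default"},
     "spec": {"type": "ClusterIP", "clusterIP": "10.100.0.1",
              "ports": [{"port": 443, "targetPort": 6443, "protocol": "TCP"}]}},
]})

# Constructed sample in the shape of status.addresses from `kubectl get nodes -o json`.
SAMPLE_NODE_ADDRESSES = [
    {"type": "InternalIP", "address": "10.0.2.15"},
    {"type": "ExternalIP", "address": "203.0.113.10"},
    {"type": "Hostname", "address": "kubenode01"},
]


def resolve_ports(service):
    """Effective (protocol, port, targetPort, nodePort) for each port entry."""
    resolved = []
    for p in service["spec"]["ports"]:
        resolved.append({
            "protocol": p.get("protocol", "TCP"),
            "port": p["port"],
            "targetPort": p.get("targetPort", p["port"]),  # defaulting rule from the answer
            "nodePort": p.get("nodePort"),                  # None for plain ClusterIP services
        })
    return resolved


def external_address(node_addresses, prefer=("ExternalIP", "InternalIP")):
    """Pick the node address a client outside the cluster would use."""
    for kind in prefer:
        for entry in node_addresses:
            if entry["type"] == kind:
                return entry["address"]
    raise ValueError("node has no usable address")


def service_access(service, node_addresses):
    """Per port: in-cluster URL, pod port, outside URL (if any) and notes."""
    meta = service["metadata"]
    dns = f"{meta['name']}.{meta['namespace']}.svc.cluster.local"
    entries = []
    for p in resolve_ports(service):
        entry = {"in_cluster": f"{dns}:{p['port']}", "pod_port": p["targetPort"],
                 "outside": None, "notes": []}
        if p["nodePort"] is not None:
            # kube-proxy opens this port on every node, not just the one shown.
            entry["outside"] = f"{external_address(node_addresses)}:{p['nodePort']}"
            entry["notes"].append("nodePort is open on every node's address")
            if p["nodePort"] not in DEFAULT_NODE_PORT_RANGE:
                entry["notes"].append("nodePort outside the default 30000-32767 range")
        entries.append(entry)
    return entries


def summarize(services_json, node_addresses):
    """One line per Service port, in the order kubectl returned them."""
    lines = []
    for svc in json.loads(services_json)["items"]:
        for e in service_access(svc, node_addresses):
            outside = e["outside"] or "not exposed"
            lines.append(f"{e['in_cluster']} -> pod:{e['pod_port']} | outside: {outside}")
    return lines


if __name__ == "__main__":
    for line in summarize(SAMPLE_SERVICES_JSON, SAMPLE_NODE_ADDRESSES):
        print(line)
```

Because a NodePort is bound on every node by kube-proxy, the "outside" column is a lower bound on exposure; pairing it with the node list from kubectl get nodes -o json shows every address on which the port answers.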