I have a system composed of three main components: a k8s cluster, a bind9 VM acting as the internal DNS server, and a MongoDB replica set (each mongo member is a VM). Everything runs in GCP.
The k8s cluster is in one network (let's call it net1), and the bind9 and mongo VMs are in a different network (net2).
I have successfully configured bind9 to serve as the DNS server for all VMs in both networks. However, when I point kube-dns at bind9's external IP as the stubDomain for my somedomain.com domain, DNS resolution inside pods fails: pinging foo.somedomain.com produces an "unknown host" error.
I have done the following:
added the cluster's external IP to the allow-query line of bind9.
configured the proper firewall rules; traffic on port 53 flows freely between the cluster's pods and the bind9 VM.
my configMap has this:

stubDomains: |
  {"somedomain.com": ["<external IP of bind9 VM>"], "internal": ["169.254.169.254"]}
With this configuration, DNS resolution fails. But if I switch to a bind9 VM inside net1 and use its internal IP, everything works.
This is not a connectivity or permission problem; traceroute over port 53 works.
Any advice?
Right now I'm setting up a Kubernetes cluster with Azure Kubernetes Service (AKS).
I'm using the "bring your own subnet" feature and kubenet as the network mode.
As you can see in the diagram, on the left side is an example VM.
In the middle is a load balancer I set up in the cluster, which directs incoming traffic to all pods with the label "webserver"; this works fine.
On the right side is an example node of the cluster.
My problem is the outgoing traffic of the nodes. As you would expect, if you ssh into a VM in subnet 1 from a node in subnet 2, the connection uses the node's IP, the .198 (red line).
I would like to route the traffic over the load balancer, so that the incoming SSH connection at the VM in subnet 1 has a source address of .196 (green line).
Reason: we have a central firewall. To open ports, I have to specify the IP address the packets come from. I would like to route the traffic over one central load balancer so that only one IP has to be allowed through the firewall; otherwise, every packet would carry the source IP of its node.
Is this possible?
I have tried to look this use case up in the Azure docs, but most of the time they talk about using public IPs, which I am not using in this case.
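For example, the outbound type setting SNATs all node egress through a single load-balancer frontend IP, which sounds close to what I want, except that it involves a public IP (a sketch; the resource group, cluster name, and subnet ID are placeholders):

az aks create \
  --resource-group myRG \
  --name myCluster \
  --network-plugin kubenet \
  --vnet-subnet-id <subnet-2-id> \
  --outbound-type loadBalancer \
  --load-balancer-managed-outbound-ip-count 1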
I deployed an OpenVPN server in the K8s cluster and an OpenVPN client on a host outside the cluster. However, through the client I can only access pods on the host where the OpenVPN server is located, not pods on the other hosts in the cluster.
The cluster network is Calico. I also added some iptables rules to the host in the cluster that runs the OpenVPN server.
Capturing packets on tun0 on the server, I found that the return packets never arrived.
When the server is deployed with hostNetwork, a FORWARD rule is missing from iptables.
I'm not sure how you set up iptables inside the server pod, as iptables/netfilter is not accessible on most Kubernetes clusters I have seen.
If you want full access to cluster networking over that OpenVPN server, you probably want hostNetwork: true on your VPN server pod. The problem is that you still need a proper MASQUERADE/SNAT rule to get the responses back across to your client.
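In pod-spec terms, that looks roughly like this (a minimal sketch; the pod name and image are assumptions, not taken from your setup):

apiVersion: v1
kind: Pod
metadata:
  name: openvpn-server        # hypothetical name
spec:
  hostNetwork: true           # share the node's network namespace
  containers:
  - name: openvpn
    image: kylemanna/openvpn  # a commonly used community image, as an example
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]    # needed to create tun devices and edit iptables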
You should investigate the traffic going out of the server pod to see whether it has a properly rewritten source address; otherwise the other nodes in the cluster will have no idea how to route the response.
You probably have a common gateway for your nodes; depending on your Kubernetes implementation, you might get around this issue by setting a route back to your VPN there, but that will likely require some scripting around the VPN server itself to make sure the route is updated each time the server pod is rescheduled.
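For the MASQ/SNAT part, a minimal sketch, assuming the default OpenVPN subnet 10.8.0.0/24 and tun0 as the tunnel device (adjust both to your setup), run on the node hosting the VPN server:

# Rewrite the source address of VPN-client traffic leaving the tunnel subnet
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j MASQUERADE
# Make sure forwarding between the tunnel and the node is allowed
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT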
I have a REST API running locally on my laptop at https://localhost:5001/something. I want it to be reachable inside the Kubernetes cluster via a K8s DNS name, so that an application running inside a Pod could use some-service instead of needing the entire URL.
Also, since localhost is relative to the machine resolving it, how would I get the Service or ExternalName to reach localhost on the host machine instead of inside the K8s cluster?
I tried host.docker.internal (as suggested elsewhere), but that didn't work.
And this passage from the K8s documentation says that the endpoint can't be a loopback address:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
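In Service/Endpoints terms, what I'm trying to express is roughly this (a sketch; 192.168.1.50 is a placeholder for the laptop's LAN address, since a loopback IP is not allowed):

apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
  - port: 5001
---
apiVersion: v1
kind: Endpoints
metadata:
  name: some-service      # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.50      # placeholder: the host machine's non-loopback IP
  ports:
  - port: 5001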
I'm running:
Host Machine: Ubuntu 20.04
K8s: k3d
Web API: (.Net Core 3.1 on Linux, created by dotnet new webapi MyAPI)
Telepresence is a tool created exactly for this kind of quick local testing of an application against a k8s cluster. It allows you to run a single service locally while connecting it to a remote Kubernetes cluster.
It substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.
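With the classic Telepresence CLI, exposing the local API under an in-cluster name could look like this (a sketch; the name and port match the question, and the run command is an assumption):

telepresence --new-deployment some-service --expose 5001 --run dotnet run

This creates a proxy deployment (and, if I recall the v1 behavior correctly, a matching service) named some-service in the cluster, forwarding its port 5001 to the local process.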
An alternative would be to create a Service backed by an SSH server running in a pod, and use an SSH reverse tunnel to open a connection back to your local machine.
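A rough sketch of that approach (all names are hypothetical, and the pod's sshd needs GatewayPorts enabled so the forwarded port is reachable from other pods):

# Reach the sshd pod from the laptop
kubectl port-forward pod/ssh-server 2222:22 &
# Reverse-forward the pod's port 5001 to the laptop's local API
ssh -p 2222 -R 0.0.0.0:5001:localhost:5001 user@127.0.0.1

A Service named some-service selecting the ssh-server pod on port 5001 then resolves in-cluster and lands on the laptop.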
I'm trying to create a Windows Server Failover Cluster on Windows Server 2016 in Azure, using this article https://clusteringformeremortals.com/2016/04/23/deploying-microsoft-sql-server-2014-failover-clusters-in-azure-resource-manager-arm/
However, when I execute New-Cluster -Name sql-sql-cls -Node sql-sql-0,sql-sql-1 -StaticAddress 10.0.192.101 -NoStorage, I get New-Cluster : Static address '10.0.192.101' was not found on any cluster network. My VM1 has IP 10.0.192.5 and VM2 has IP 10.0.192.6. How can I fix this?
Add a load balancer to the same subnet as the network cards the cluster is on, and use the IP address that gets assigned to the load balancer.
The fix seems to be that all the nodes must have a default gateway IP address configured. It doesn't have to be a real gateway, just an IP in the same range.
With that in place, the cluster is created with no problems. Once the cluster is running, you can remove the gateway IP address again.
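In PowerShell terms, a sketch of that workaround (the interface alias and the gateway address, any otherwise unused IP in the subnet, are assumptions):

# Add a default route so the node reports a gateway
New-NetRoute -InterfaceAlias "Ethernet" -DestinationPrefix "0.0.0.0/0" -NextHop "10.0.192.1"
# Create the cluster, then remove the temporary route again
Remove-NetRoute -InterfaceAlias "Ethernet" -DestinationPrefix "0.0.0.0/0" -Confirm:$false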
I built a Kubernetes cluster using the flannel overlay network. The problem is that one of the service IPs isn't always accessible.
I tested from within the cluster by telneting to the service IP and port, which ended in a connection timeout. Checking with netstat, the connection was always stuck in the "SYN_SENT" state; it seemed the peer never accepted the connection.
But if I telnet directly to the IP and port of a pod backing the service, the connection succeeds.
This only happens to one of the services; the other services are fine.
And if I scale the backend up to more pods, say 2, then some requests to the service IP succeed. It seems the service can't connect to one particular backing pod.
Which component could be the cause of such a problem: my service configuration, kube-proxy, or flannel?
Check the discussion here: https://github.com/kubernetes/kubernetes/issues/38802
You need to set the sysctl net.bridge.bridge-nf-call-iptables=1 on the nodes.
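A minimal sketch of applying it on each node (the br_netfilter module must be loaded for the key to exist; the file name is just a convention):

# Load the bridge netfilter module and enable iptables for bridged traffic
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
# Persist the setting across reboots
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-kubernetes.conf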