I'm following the Kubernetes VM setup guide. After the VMDK is uploaded successfully, the kubernetes-master VM starts up, but it has no IPv4 address. Because of that, the next step in the script fails, since it tries to SSH to the Kubernetes master to run the node setup. Other VMs created via the web client correctly receive an IPv4 address, but any VM created from govc does not.
This was a customer's setup, and it turns out they did not have DHCP set up on the selected network; therefore, no IP was assigned.
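For anyone hitting the same symptom, one quick check is to ask govc for the guest address; govc vm.ip waits for the guest to report an IP, so the command hanging indefinitely is itself a sign that no address was ever assigned:
govc vm.ip kubernetes-master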
Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practise K8s, and I deployed three pods along with their services. I chose three different service types: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port (an example of the command I mean is shown below). The problem is that I have to leave that command running, and if it is interrupted for some reason, the connection is dropped. Is there an alternative solution here?
For NodePort, I can only access the app via the minikube node IP, not via localhost; so if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via Docker Compose, I can access all of these services via localhost:port, and I can even call them via VM_IP:port from other systems.
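For reference, the command I keep running for the ClusterIP case looks roughly like this (the service name and ports are just examples):
kubectl port-forward service/my-clusterip-service 8080:80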
Thanks,
-Rafi
I have a system that is composed of three main components: a k8s cluster, a bind9 VM (an internal DNS server), and a MongoDB replica set (each mongo member is its own VM). Everything is in GCP.
The k8s cluster is in one network (let's call it net1), and the bind9 and mongo VMs are on a different network (net2).
I have successfully configured bind9 to serve as the DNS for all VMs in both networks. However, when I point kube-dns at bind9's external IP as the stub domain for my somedomain.com domain, DNS resolution inside pods fails (namely, pinging foo.somedomain.com produces an "unknown host" error).
I have done the following:
Added the cluster's external IP to the allow-query line of the bind9 configuration.
Configured the proper firewall rules; communication over port 53 is open between the cluster's pods and the bind9 VM.
My kube-dns ConfigMap has this in its data section:
stubDomains: |
  {"somedomain.com": ["externalIP for bind9 VM"], "internal": ["169.254.169.254"]}
When I run this, DNS resolution fails. But if I switch to a bind9 VM that sits inside net1 and use its internal IP, it works.
This is not a communication/permission problem; traceroute over port 53 works.
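For what it's worth, this is roughly how I test resolution from inside a pod, both through kube-dns and directly against bind9 (the pod name and the bind9 IP are placeholders):
kubectl exec -it some-pod -- nslookup foo.somedomain.com                       # via kube-dns, goes through the stubDomain
kubectl exec -it some-pod -- nslookup foo.somedomain.com <bind9 external IP>   # querying bind9 directly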
Please advise.
I deployed an OpenVPN server in the K8s cluster and an OpenVPN client on a host outside the cluster. However, through the client I can only reach Pods on the host where the OpenVPN server is located; I cannot reach Pods on the other hosts in the cluster.
The cluster network is Calico. I also added some iptables rules on the cluster host where the OpenVPN server runs.
When capturing packets on tun0 on the server, I found that the return packets never come back.
When the server is deployed with hostNetwork, an iptables FORWARD rule is missing.
I'm not sure how you set up iptables inside the server pod, as iptables/netfilter is not accessible in most Kubernetes clusters I have seen.
If you want full access to cluster networking over that OpenVPN server, you probably want to use hostNetwork: true on your VPN server. The problem is that you still need a proper MASQUERADE/SNAT rule to get the responses back to your client.
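A minimal sketch of such a pod spec, assuming a generic OpenVPN image (the image name is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: openvpn-server
spec:
  hostNetwork: true          # share the node's network namespace
  containers:
  - name: openvpn
    image: example/openvpn   # placeholder image
    ports:
    - containerPort: 1194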
You should investigate the traffic going out of the server pod to see whether it has a properly rewritten source address; otherwise the nodes in the cluster will have no knowledge of how to route the response.
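For example (a sketch; the 10.8.0.0/24 OpenVPN client subnet is an assumption, adjust it to your setup):
tcpdump -ni any src net 10.8.0.0/24                                            # check whether VPN-client traffic leaves with an un-rewritten source address
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j MASQUERADE   # SNAT it to the node's own address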
You probably have a common gateway for your nodes; depending on your Kubernetes implementation, you might get around this issue by setting a route back to your VPN on that gateway, but that will likely require some scripting around the VPN server itself to make sure the route is updated each time the server pod is rescheduled.
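For example, on that common gateway, something along these lines (the client subnet and node IP are placeholders):
ip route add 10.8.0.0/24 via 192.0.2.10   # 192.0.2.10 = the node currently running the VPN server pod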
I have a REST API running locally on my laptop at https://localhost:5001/something. I want it to be reachable inside the Kubernetes cluster under a K8s DNS name, so that, for example, an application running inside a Pod could use some-service instead of needing the full URL.
Also, since localhost is relative to the host machine, how would I get the Service or ExternalName to reach localhost on the host machine, instead of inside the K8s cluster?
I tried docker.host.internal (as suggested here) but that didn't work.
And this, from the K8s documentation, says that it can't be a loopback address:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
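What I had in mind is roughly a selector-less Service plus manual Endpoints like this (the IP is a placeholder for whatever non-loopback address of the host machine would work):
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
  - port: 5001
    targetPort: 5001
---
apiVersion: v1
kind: Endpoints
metadata:
  name: some-service
subsets:
- addresses:
  - ip: 192.168.1.10   # placeholder for the host machine's address
  ports:
  - port: 5001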
I'm running:
Host Machine: Ubuntu 20.04
K8s: k3d
Web API: .NET Core 3.1 on Linux (created by dotnet new webapi MyAPI)
Telepresence is a tool created exactly for this kind of quick local testing of your application with a k8s cluster. It allows you to run a single service locally while connecting it to a remote Kubernetes cluster.
It substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.
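Roughly, with Telepresence 1.x syntax (the service name is a placeholder; check the docs for your version):
telepresence --new-deployment some-service --expose 5001
This should create a deployment and service called some-service in the cluster and route requests to some-service:5001 to the process listening on port 5001 on your laptop.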
An alternative would be to create a Service backed by an SSH server running in a pod and use a reverse tunnel to open a connection back to your local machine.
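A rough sketch of that approach, assuming an sshd pod exposed in-cluster as a Service called some-service on port 5001, with GatewayPorts enabled in its sshd_config so the forwarded port is reachable from other pods (all names and ports are placeholders):
kubectl port-forward pod/sshd-pod 2222:22                      # make the pod's sshd reachable from the laptop
ssh -p 2222 -N -R 0.0.0.0:5001:localhost:5001 user@127.0.0.1   # remote-forward cluster-side port 5001 to the local API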
We've just shipped a standalone Service Fabric cluster to a customer site with a misconfiguration. Our setup:
Service Fabric 6.4
2 Windows servers, each running 3 Hyper-V virtual machines that host the cluster
We configured the cluster locally using static IP addresses for the nodes. When the servers arrived on site, the IP addresses of the Hyper-V machines were changed to conform to the customer's available IP addresses. Now we can't connect to the cluster, since every IP in the clusterConfig is wrong. Is there any way we can recover from this without reinstalling the cluster? We'd prefer to keep the new IPs assigned to the VMs if possible.
I've tested this only in my test environment (I've never done it in production, so do it at your own risk), but since you can't connect to the cluster anyway, I think it is worth a try.
Connect to each virtual machine that is part of the cluster and do the following steps:
Locate the Service Fabric cluster files (usually C:\ProgramData\SF\{nodeName}\Fabric).
Take the ClusterManifest.current.xml file and copy it to a temp folder (for example C:\temp).
Go to the Fabric.Data subfolder, take the InfrastructureManifest.xml file, and copy it to the same temp folder.
Inside each copied file, change the node IP addresses to the correct values.
Stop the FabricHostSvc service by running net stop FabricHostSvc in PowerShell.
After it stops successfully, run this PowerShell command (in admin mode) to update the node's cluster configuration:
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
Once the config is updated, start FabricHostSvc again: net start FabricHostSvc
Do this for each node and pray for the best.
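For convenience, the whole sequence on one node could look roughly like this in an elevated PowerShell (the node name is a placeholder; edit the IP addresses in the copied files before running the last three commands):
$node = "vm0"                                        # placeholder node name
$fabric = "C:\ProgramData\SF\$node\Fabric"
New-Item -ItemType Directory -Path C:\temp -Force | Out-Null
Copy-Item "$fabric\ClusterManifest.current.xml" C:\temp
Copy-Item "$fabric\Fabric.Data\InfrastructureManifest.xml" C:\temp
# edit the node IP addresses inside both files under C:\temp, then:
net stop FabricHostSvc
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
net start FabricHostSvc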