Kubernetes UI on Google Container Engine - kubernetes

Is there any way to access the UI on the GKE service?
I tried following the information on https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/user-guide/ui.md
And got this
Error: 'empty tunnel list.'
Trying to reach: 'http://10.64.xx.xx:8080/'
Is this feature turned on?

That error means that the master can't communicate with the nodes in your cluster. Have you deleted instances from your cluster, or modified the firewalls? There should be a firewall rule allowing SSH access to the nodes in the cluster from the master's IP address, and an entry in your project-wide metadata with the master's public SSH key.

Something to check: make sure you haven't added SSH keys to the cluster nodes' metadata. I did this a few weeks back, opened a support case, and found that I should have added the keys to the project metadata instead.
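A quick way to check both of the things mentioned above is with gcloud. This is a sketch, not the exact fix; rule and key names vary per project:

```shell
# List firewall rules and look for one allowing SSH (tcp:22) to the
# cluster nodes from the master's IP:
gcloud compute firewall-rules list

# Inspect project-wide metadata; the master's public SSH key should
# appear in the project-level "sshKeys" entry, not on individual instances:
gcloud compute project-info describe
```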

Related

Why does a K8s app fail to connect to MongoDB Atlas? - persisting k8s node IPs

Just trying to make an app on k8s connect to MongoDB Atlas.
So far I've tried the following:
Changed the dnsPolicy to Default (and many others): no luck
Created an nginx-ingress, so I have the main IP address of the cluster
Added that IP to the IP access list, but still no luck
The cluster tier is M2 - so no private peering or private endpoints.
The Deployment/Pod that is trying to connect will not have a DNS name assigned to it; it is simply a service running inside of k8s and processing RabbitMQ messages.
So I'm not sure what I should whitelist if the service is never exposed.
I assume it would have to be something with the nodes or k8s egress, but I'm not sure where to even look.
I've tried pretty much everything I could and still cannot find clear documentation on how to achieve the desired result, apart from whitelisting all IP addresses.
UPDATE: Managed to find this article https://www.digitalocean.com/community/questions/urgent-how-to-connect-to-mongodb-atlas-cluster-from-a-kubernetes-pod
So now I'm trying to find a way to persist the node IP addresses; as I understand it, scaling up or down or upgrading nodes will create new IP addresses.
So is there a way to persist them?
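Individual node IPs can't really be pinned on GKE, but egress traffic can be given a stable address: run the nodes privately behind Cloud NAT with a reserved static IP, then whitelist only that IP in Atlas. A sketch, assuming a private cluster on the default network in us-central1; all resource names here are placeholders:

```shell
# Reserve a static external IP that egress traffic will use:
gcloud compute addresses create atlas-egress-ip --region=us-central1

# Create a Cloud Router and a NAT gateway that uses the reserved IP:
gcloud compute routers create nat-router \
  --network=default --region=us-central1

gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --nat-external-ip-pool=atlas-egress-ip \
  --nat-all-subnet-ip-ranges
```

Note this only applies to nodes without external IPs (private clusters); nodes with external IPs egress through their own addresses.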

How to make cluster nodes private on Google Kubernetes Engine?

I noticed every node in a cluster has an external IP assigned to it. That seems to be the default behavior of Google Kubernetes Engine.
I thought the nodes in my cluster should be reachable only from the local network (through their virtual IPs), but I could even connect directly to a mongo server running in a pod from my home computer, just by connecting to its hosting node (without using a LoadBalancer).
I tried to stop Container Engine from assigning external IPs to newly created nodes by changing the cluster instance template settings (changing the "External IP" property from "Ephemeral" to "None"). But after I did that, the cluster was not able to start any pods (I got a "Does not have minimum availability" error). The new instances did not even show up in the list of nodes in my cluster.
After switching back to the default instance template with external IP everything went fine again. So it seems for some reason Google Kubernetes Engine requires cluster nodes to be public.
Could you explain why is that and whether there is a way to prevent GKE exposing cluster nodes to the Internet? Should I set up a firewall? What rules should I use (since nodes are dynamically created)?
I think Google not allowing private nodes is something of a security issue... Suppose someone discovers a security hole in a database management system. We'd feel much more comfortable working on fixing it (applying patches, upgrading versions) if our database nodes were not exposed to the Internet.
GKE recently added a new feature allowing you to create private clusters, which are clusters where nodes do not have public IP addresses.
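Creating such a private cluster is a single gcloud command. A sketch, with the cluster name and master CIDR as placeholders; private nodes also require VPC-native (IP alias) networking:

```shell
gcloud container clusters create my-private-cluster \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr=172.16.0.32/28
```

With --enable-private-nodes, the nodes get only internal IPs; add --enable-private-endpoint as well if the master's endpoint should also be private.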
This is how GKE is designed, and there is no way around it that I am aware of. There is no harm in running Kubernetes nodes with public IPs, and if these are the IPs used for communication between nodes, you cannot avoid it.
As for your security concern: if you run that example DB on Kubernetes, even with a public node IP it would not be accessible, as it would live only on the internal pod-to-pod network, not on the nodes themselves.
As described in this article, you can use network tags to identify which GCE VMs or GKE clusters are subject to certain firewall rules and network routes.
For example, if you've created a firewall rule to allow traffic to ports 27017, 27018, and 27019 (the default TCP ports used by MongoDB), give the desired instances a tag and then use that tag to apply the firewall rule, allowing access to those ports only on those instances.
Also, it is possible to create a GKE cluster with GCE tags applied to all nodes in the new node pool, so the tags can be used in firewall rules to allow or deny desired or undesired traffic to the nodes. This is described in this article under the --tags flag.
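The two steps above can be sketched as follows; the tag name, source range, and cluster name are placeholders to adapt:

```shell
# Allow the default MongoDB ports only to instances carrying the tag,
# and only from the internal network:
gcloud compute firewall-rules create allow-mongodb \
  --allow=tcp:27017-27019 \
  --target-tags=mongodb \
  --source-ranges=10.0.0.0/8

# Apply the tag to every node in a new cluster's default node pool:
gcloud container clusters create my-cluster --tags=mongodb
```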
The Kubernetes master runs outside your network and needs to access your nodes. This could be the reason for having public IPs.
When you create your cluster, there are some firewall rules created automatically. These are required by the cluster, and there's e.g. ingress from master and traffic between the cluster nodes.
The 'default' network in GCP has ready-made firewall rules in place. These allow all SSH and RDP traffic from the internet and allow pinging of your machines. You can remove these without affecting the cluster, and your nodes will no longer be reachable from the internet.
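Those pre-populated rules on the default network have well-known names, so removing them is one command (double-check with `gcloud compute firewall-rules list` first that nothing else depends on them):

```shell
# Remove the default internet-facing rules (SSH, RDP, ICMP) from the
# 'default' network; cluster-internal rules are separate and stay intact.
gcloud compute firewall-rules delete \
  default-allow-ssh default-allow-rdp default-allow-icmp
```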

Joining an external Node to an existing Kubernetes Cluster

I have a custom Kubernetes cluster (deployed using kubeadm) running on virtual machines from an IaaS provider. The Kubernetes nodes have no internet-facing IP addresses (except for the master node, which I also use for ingress).
I'm now trying to join a Machine to this Cluster that is not hosted by my main IAAS provider. I want to do this because I need specialized computing resources for my application that are not offered by the IAAS.
What is the best way to do this?
Here's what I've tried already:
Run the cluster on internet-facing IP addresses
I have no trouble joining the node when I tell kube-apiserver on the master node to listen on 0.0.0.0 and use public IP addresses for every node. However, this approach is non-ideal from a security perspective and also leads to higher cost, because public IP addresses have to be leased for nodes that normally don't need them.
Create a Tunnel to the Master Node using sshuttle
I've had moderate success by creating a tunnel from the external Machine to the Kubernetes Master Node using sshuttle, which is configured on my external Machine to route 10.0.0.0/8 through the tunnel. This works in principle, but it seems way too hacky and is also a bit unstable (sometimes the external machine can't get a route to the other nodes, I have yet to investigate this problem further).
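For reference, the sshuttle setup described above boils down to one command on the external machine; the user and master address are placeholders:

```shell
# Route all 10.0.0.0/8 traffic from this machine through an SSH tunnel
# to the master, running in the background:
sshuttle --daemon -r ubuntu@MASTER_PUBLIC_IP 10.0.0.0/8
```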
Here are some ideas that could work, but I haven't tried yet because I don't favor these approaches:
Use a proper VPN
I could try to use a proper VPN tunnel to connect the machine. I don't favor this solution because it would add an (admittedly quite small) overhead to the cluster.
Use a cluster federation
It looks like kubefed was made specifically for this purpose. However, I think this is overkill in my case: I'm only trying to join a single external Machine to the Cluster. Using Kubefed would add a ton of overhead (Federation Control Plane on my Main Cluster + Single Host Kubernetes Deployment on the external machine).
I couldn't think of any better solution than a VPN here. Especially since you have only one isolated node, it should be relatively easy to make the handshake happen between this node and your master.
Routing the traffic from "internal" nodes to this isolated node is also trivial. Because all nodes already use the master as their default gateway, modifying the route table on the master is enough to forward the traffic from internal nodes to the isolated node through the tunnel.
You have to be careful with the configuration of your container network though. Depending on the solution you use to deploy it, you may have to assign a different subnet to the Docker bridge on the other side of the VPN.
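On the master side, the forwarding described above amounts to something like the following; the pod subnet and tunnel interface name are placeholders that depend on your container network and VPN software:

```shell
# Allow the master to forward packets between interfaces:
sysctl -w net.ipv4.ip_forward=1

# Route the isolated node's pod subnet through the VPN tunnel interface,
# so traffic from internal nodes reaches pods on the remote node:
ip route add 10.244.5.0/24 dev tun0
```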

kubeadm init on CentOS 7 using AWS as cloud provider enters a deadlock state

I am trying to install Kubernetes 1.4 on a CentOS 7 cluster on AWS (the same happens with Ubuntu 16.04, though) using the new kubeadm tool.
Here's the output of the command kubeadm init --cloud-provider aws on the master node:
# kubeadm init --cloud-provider aws
<cmd/init> cloud provider "aws" initialized for the control plane. Remember to set the same cloud provider flag on the kubelet.
<master/tokens> generated token: "980532.888de26b1ef9caa3"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
The issue is that the control plane does not become ready and the command seems to enter a deadlock state. I also noticed that if the --cloud-provider flag is not provided, pulling images from Amazon EC2 Container Registry does not work, and when creating a service with type LoadBalancer an Elastic Load Balancer is not created.
Has anyone run kubeadm using aws as cloud provider?
Let me know if any further information is needed.
Thanks!
I launched a cluster with kubeadm on AWS recently (Kubernetes 1.5.1), and it was stuck on the same step as yours. To solve it I had to add "--api-advertise-addresses=LOCAL-EC2-IP"; it didn't work with the external IP (which kubeadm probably fetches itself when no other IP is specified). So it's either a network connectivity issue (try a temporary 0.0.0.0/0 security group rule on that master instance) or something else... In my case it was a network issue: the instance wasn't able to connect to itself using its own external IP :)
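Concretely, the workaround is to pass the instance's private address on init. A sketch, assuming kubeadm from the 1.5 era (the flag was later renamed) and with the IP as a placeholder:

```shell
# Advertise the API server on the instance's local EC2 IP rather than
# the external one kubeadm would otherwise pick up:
kubeadm init --cloud-provider=aws --api-advertise-addresses=10.0.1.23
```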
Regarding PV and ELB integrations, I actually did launch a "PersistentVolumeClaim" with my MongoDB cluster and it works (it created the volume and attached to one of the slave nodes)
Here it is, for example: a PV created and attached to a slave node.
So the latest kubeadm that ships with Kubernetes 1.5.1 should work for you too!
One thing to note: you must have the proper IAM role permissions to create resources (assign your master node an IAM role with something like "EC2 full access" during testing; you can tune it later to allow only the few needed actions).
Hope it helps.
The documentation (as of now) clearly states the following in the limitations:
The cluster created here doesn't have cloud-provider integrations, so for example it won't work with Load Balancers (LBs) or Persistent Volumes (PVs). To easily obtain a cluster which works with LBs and PVs, try the "hello world" GKE tutorial or one of the other cloud-specific installation tutorials.
http://kubernetes.io/docs/getting-started-guides/kubeadm/
There are a couple of possibilities I am aware of here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus, which I have not seen documented anywhere:
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
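Putting the variables above together, a run behind a proxy might look like this. The proxy host and the excluded ranges are placeholders; the NO_PROXY list should cover your service and pod CIDRs so in-cluster traffic bypasses the proxy:

```shell
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=127.0.0.1,localhost,10.96.0.0/12,10.244.0.0/16

# The additional, less-documented variants:
export KUBERNETES_HTTP_PROXY=$HTTP_PROXY
export KUBERNETES_HTTPS_PROXY=$HTTPS_PROXY
export KUBERNETES_NO_PROXY=$NO_PROXY

kubeadm init
```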

How to re-connect to Amazon kubernetes cluster after stopping & starting instances?

I created a cluster for trying out Kubernetes using cluster/kube-up.sh on Amazon EC2. Then I stop it to save money when not using it. The next time I start the master & minion instances in Amazon, ~/.kube/config has the old IPs for the cluster master, as EC2 assigns new public IPs to the instances.
I haven't found a way to provide Elastic IPs to cluster/kube-up.sh so that consistent IPs would stay in place between stopping & starting instances. Also, the certificate in ~/.kube/config is for the old IP, so manually changing the IP doesn't work either:
Running: ./cluster/../cluster/aws/../../cluster/../_output/dockerized/bin/darwin/amd64/kubectl get pods --context=aws_kubernetes
Error: Get https://52.24.72.124/api/v1beta1/pods?namespace=default: x509: certificate is valid for 54.149.120.248, not 52.24.72.124
How can I make kubectl query the same Kubernetes master after it restarts on a different IP?
If the only thing that has changed about your cluster is the IP address of the master, you can manually modify the master location by editing the file ~/.kube/config (look for the line that says "server" with an IP address).
This use case (pausing/resuming a cluster) isn't something that we commonly test for so you may encounter other issues once your cluster is back up and running. If you do, please file an issue on the GitHub repository.
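Instead of editing the file by hand, kubectl can rewrite the server entry for you. The cluster entry name below is a guess based on the context name in the question; check the actual name with `kubectl config view` first:

```shell
# Point the existing cluster entry at the master's new address:
kubectl config set-cluster aws_kubernetes --server=https://NEW_MASTER_IP
```

The certificate mismatch in the question would remain, since the cert was issued for the old IP; this only fixes which address kubectl contacts.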
I'm not sure which version of Kubernetes you were using, but in v1.0.6 you can pass the MASTER_RESERVED_IP environment variable to kube-up.sh to assign a given Elastic IP to the Kubernetes master node.
You can check all the available options for kube-up.sh in config-default.sh file for AWS in Kubernetes repository.
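As a sketch of how that variable is used (assuming the v1.0-era kube-up.sh scripts, with a pre-allocated Elastic IP as a placeholder):

```shell
# Bring up the cluster on AWS with the master pinned to a reserved
# Elastic IP, so stop/start cycles keep the same address:
export KUBERNETES_PROVIDER=aws
export MASTER_RESERVED_IP=YOUR_ELASTIC_IP
./cluster/kube-up.sh
```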