Access SkyDNS etcd API on Google Container Engine to Add Custom Records - kubernetes

I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
It doesn't make sense to spin up another clustered DNS setup, since SkyDNS will work great for this and all of the pods in the cluster are already automatically configured to query it first.

I'm running a kubernetes cluster on GKE and I would like to discover and access the etcd API from a service pod. The reason I want to do this is to add keys to the SkyDNS hierarchy.
It sounds like you want direct access to the etcd instance that is backing the DNS service (not the etcd instance that is backing the Kubernetes apiserver, which is separate).
Is there a way to discover (or create/expose) and interact with the etcd service API endpoint on a GKE cluster from application pods?
The etcd instance for the DNS service is an internal implementation detail for the DNS service and isn't designed to be directly accessed. In fact, it's really just a convenient communication mechanism between the kube2sky binary and the skydns binary so that skydns wouldn't need to understand that it was running in a Kubernetes cluster. I wouldn't recommend attempting to access it directly.
In addition, this etcd instance won't even exist in Kubernetes 1.3 installs, since skydns is being replaced by a new DNS binary kubedns.
We have IoT gateway nodes that connect to our cloud services via an SSL VPN to ease management and comms. When a device connects to the VPN I want to update an entry in SkyDNS with the hostname and VPN IP address of the device.
If you create a new service, the cluster DNS will get a new entry mapping the service name to the endpoints that back the service. What if you programmatically created a service each time a new IoT device registers, rather than trying to configure DNS directly?
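One way to sketch that out (the namespace, device name, port, and IP below are all hypothetical, not anything from the question) is a Service without a selector plus a manually managed Endpoints object, which gives each device a stable cluster-DNS name that your VPN connect hook can create or update:

    # Hypothetical sketch: create one selector-less Service plus an Endpoints
    # object per IoT device when it connects to the VPN.
    apiVersion: v1
    kind: Service
    metadata:
      name: gateway-42        # hypothetical device name
      namespace: iot          # hypothetical namespace
    spec:
      ports:
      - port: 443
        targetPort: 443
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: gateway-42        # must match the Service name
      namespace: iot
    subsets:
    - addresses:
      - ip: 10.8.0.42         # the device's VPN-assigned IP (placeholder)
      ports:
      - port: 443
    # Pods could then resolve gateway-42.iot.svc.cluster.local via the cluster DNS.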

Related

Cross cluster communication in GKE Multi-cluster Service

I’m using GKE multi-cluster service and have configured two clusters.
On one cluster I have an endpoint I want to consume, and it's hard-coded to the address:
redpanda-0.redpanda.processing.svc.cluster.local.
Does anyone know how I can reach this from the other cluster?
EDIT:
I have exported the service, which is then automatically imported into the other cluster. Previously, I was able to connect to the other cluster using SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local, but then I had to change the endpoint address manually to that exact address. In my new case, the endpoint address is not configurable.
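For reference, a minimal ServiceExport sketch for the setup described above might look like the following (assuming the backing Service is named redpanda in the processing namespace); once imported, the other cluster resolves it under svc.clusterset.local rather than svc.cluster.local:

    # Sketch of a GKE multi-cluster ServiceExport (assumed names); exporting
    # the redpanda Service makes it importable by other clusters in the fleet.
    kind: ServiceExport
    apiVersion: net.gke.io/v1
    metadata:
      name: redpanda          # must match the Service being exported
      namespace: processing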

OpenVPN can access the K8S cluster, but only reaches pods on the host where the server is located; it cannot access pods on other hosts in the cluster

I deployed the OpenVPN server in the K8S cluster and the OpenVPN client on a host outside the cluster. However, when I connect from the client, I can only access pods on the host where the OpenVPN server is located; I cannot access pods on other hosts in the cluster.
The cluster network is Calico. I also added the following iptables rules to the OpenVPN server's host in the cluster:
When I captured packets on tun0 on the server, I found that no return packets came back.
When the server is deployed with hostNetwork, a FORWARD rule is missing from iptables.
I'm not sure how you set up iptables inside the server pod, as iptables/netfilter is not accessible on most Kubernetes clusters I have seen.
If you want full access to cluster networking over that OpenVPN server, you probably want to use hostNetwork: true on your VPN server (sketched below). The problem is that you still need a proper MASQ/SNAT rule to get responses back to your client.
You should investigate the traffic going out of the server pod to see if it has a properly rewritten source address; otherwise the nodes in the cluster will have no knowledge of how to route the response.
You probably have a common gateway for your nodes; depending on your kube implementation you might get around this issue by setting a route back to your VPN, but that will likely require some scripting around the VPN server itself to make sure the route is updated each time the server pod is rescheduled.
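As a rough illustration of the hostNetwork suggestion (all names and the image are placeholders, and the MASQ/SNAT rule still has to be added inside the container or on the node), the VPN server's spec could look something like this:

    # Hypothetical sketch: run the OpenVPN server in the node's network namespace.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: openvpn-server          # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: openvpn-server
      template:
        metadata:
          labels:
            app: openvpn-server
        spec:
          hostNetwork: true         # share the node's network stack and routes
          containers:
          - name: openvpn
            image: example/openvpn:latest   # placeholder image
            securityContext:
              capabilities:
                add: ["NET_ADMIN"]  # needed to create tun0 and iptables rules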

How to allow nodes of one GKE cluster to connect to another GKE cluster

I have a GKE cluster setup, dev and stg let's say, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands on that GKE cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but the IPs keep changing,
so my question is: how can I add the ever-changing default-pool node IPs of the other cluster to the master authorized networks?
EDIT: I think I found the solution: it's not the node IPs but the NAT IP that I added to the authorized networks, so assuming I don't change that I just need to add the NAT address, I guess, unless someone knows a better solution?
I'm not sure you are going about this the right way. In Kubernetes, communication happens between services, which represent pods deployed on one or several nodes.
When you communicate with the outside world, you reach an endpoint (an API or a specific port). The endpoint is exposed by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly instead of going through the standard mechanisms.
At most, you can reach a NodePort service exposed on NodeIP:nodePort.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to Login to GKE via service account with token.
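As an illustration (every value below is a placeholder), the kubeconfig used by such a pipeline ties the master IP, the cluster CA certificate, and the service-account token together roughly like this:

    # Hypothetical kubeconfig sketch for reaching the dev cluster's master
    # with a service-account token.
    apiVersion: v1
    kind: Config
    clusters:
    - name: dev-gke
      cluster:
        server: https://MASTER_IP              # master endpoint (placeholder)
        certificate-authority-data: BASE64_CA  # cluster CA certificate (placeholder)
    users:
    - name: deployer
      user:
        token: SERVICE_ACCOUNT_TOKEN           # placeholder token
    contexts:
    - name: dev
      context:
        cluster: dev-gke
        user: deployer
    current-context: dev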

Google Kubernetes Engine Service loadBalancerSourceRanges not allowing connection on IP range

I'm exposing an application running on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with the IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By modifying the service's .yaml file so the loadBalancerSourceRanges value matches the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but didn't allow me to connect to the service endpoint. The Kubernetes cluster is located in the same network as the OpenVPN server. Is there some additional configuration needed, other than setting loadBalancerSourceRanges to the desired ingress IP range for the service?
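For context, the relevant part of such a Service manifest looks roughly like this (the name, ports, and CIDR are placeholders, not the actual values from the question):

    # Sketch of a LoadBalancer Service restricted to a VPN address range.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app              # placeholder name
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 443
        targetPort: 8443
      loadBalancerSourceRanges:
      - 10.8.0.0/24             # allowed client source range (placeholder)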
You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, automatic firewall rules have changed so that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster's network. You can see this notification in the Release Notes published in the official documentation.

How does a Kubernetes service work?

Of all the Kubernetes concepts, I find the way services work the most difficult to understand.
Here is what I imagine right now:
kube-proxy on each node listens for any new service/endpoint from the master API server
If there is any new service/endpoint, it adds a rule to that node's iptables
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node will forward the request to the new service's IP
Is it correct? There are still a few things I'm still not clear:
Do services live within nodes? If so, can we SSH into a node and inspect how services work?
Are service IPs virtual IPs that are only accessible from within the nodes?
Most of the diagrams that I see online draw services as spanning all nodes, which makes it even more difficult to picture
kube-proxy on each node listens for any new service/endpoint from the master API server
Kubernetes uses etcd to share the current cluster configuration information across all nodes (including pods, services, deployments, etc.).
If there is any new service/endpoint, it adds a rule to that node's iptables
Internally, Kubernetes has a so-called Endpoint Controller that is responsible for modifying the DNS configuration of the virtual cluster network so that service endpoints are available via DNS (and environment variables).
For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node will forward the request to the new service's IP
Depending on the service type, additional action is taken: e.g. for type NodePort, a port is made available on the nodes on top of the automatically created ClusterIP service; or an external load balancer is created with the cloud provider; etc.
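For instance, a NodePort service declared roughly like the sketch below (name and ports are illustrative) gets a ClusterIP, and additionally kube-proxy opens the nodePort on every node and forwards NodeIP:nodePort traffic to the backing pods:

    # Illustrative NodePort Service: reachable in-cluster via its ClusterIP
    # and from outside via <any NodeIP>:30080.
    apiVersion: v1
    kind: Service
    metadata:
      name: web                 # placeholder name
    spec:
      type: NodePort
      selector:
        app: web
      ports:
      - port: 80                # ClusterIP port
        targetPort: 8080        # container port
        nodePort: 30080         # opened on every node (default range 30000-32767)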
Do services live within nodes? If so, can we SSH into a node and inspect how services work?
As explained, services are manifested in the cluster configuration, the Endpoint Controller, and additional things like ClusterIP services, load balancers, etc. I don't see a need to SSH into nodes to inspect services. Typically, interacting with the cluster API should be sufficient to investigate or update the service configuration.
Are service IPs virtual IPs that are only accessible from within the nodes?
Service IPs, like pod IPs, are virtual and accessible from within the cluster network. There is a global allocation map in etcd that maintains the complete list and allows unique new ones to be allocated. For more information on the networking model, read this blog.
For more detailed information see the docs for kubernetes components and services.