Right now I'm accessing my pods (Postgres, port 5432) through an exposed Service, but since Google Cloud charges for every forwarding rule created, the number of pods I need to monitor or exec into is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of VPN, PuTTY tunnel or something? Any help would be appreciated!
I'm also using
kubectl exec
If you are looking for a managed solution, Google offers Cloud VPN for that:
https://console.cloud.google.com/networking/vpn/
If you are happy to roll your own, you can create a new Compute Engine instance on the same network as your nodes and set up OpenVPN there. This gives you a fixed IP as a freebie.
A more advanced solution is to run OpenVPN as a pod (or pods) and expose it with a Service of type NodePort. (Optionally, manually create a single load balancer on Google Cloud to get a static IP for it.)
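For reference, a minimal sketch of what that Service could look like. The label app: openvpn, the port 1194/UDP, and the nodePort value are all assumptions; adjust them to whatever your OpenVPN deployment actually uses.

apiVersion: v1
kind: Service
metadata:
  name: openvpn
spec:
  type: NodePort
  selector:
    app: openvpn          # assumes your OpenVPN pods carry this label
  ports:
  - protocol: UDP
    port: 1194            # the usual OpenVPN port
    targetPort: 1194
    nodePort: 31194       # any free port in the node-port range (30000-32767 by default)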
At the end of the day the ideal solution depends largely on your environment and goal.
Looking for a solution to give various developers on a local network access to services on a cloud-hosted Kubernetes cluster, without each machine having to use kubectl to port-forward and actually needing access to the cluster.
Essentially looking for a way to run a Docker container or VM where I can expose the ports, or even better, a way to forward local network traffic to the cluster's DNS.
Really stuck looking for solutions; any help would be really appreciated.
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck at this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when GitLab generates links designed to point to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if it's important to you.
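For example, assuming you picked example.local as the domain and minikube ip printed 192.168.99.100 (both placeholders), the /etc/hosts entry would look something like this; the exact hostnames depend on which sub-hosts the chart generates for your install:

192.168.99.100   gitlab.example.local registry.example.local minio.example.local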
For the externalIP, that will be what minikube ip emits, and it is the IP through which you will communicate with GitLab (since you will not be able to use the Pods' IP addresses outside of minikube). If GitLab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
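As a sketch of how those values fit together, assuming the release and chart names from the GitLab chart docs and the placeholder domain/IP from above:

minikube ip
# e.g. prints 192.168.99.100
helm upgrade --install gitlab gitlab/gitlab \
  --set global.hosts.domain=example.local \
  --set global.hosts.externalIP=192.168.99.100 \
  --set certmanager-issuer.email=you@example.com   # the chart asks for this, but see the next paragraph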
The certmanager-issuer.email you can just forget about, because it 100% will not issue you a Let's Encrypt cert while running on minikube unless they have fixed certmanager to use the dns01 protocol. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your GitLab instance, then issue the instance a self-signed cert and call it a draw.
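If you do go the self-signed route, a sketch of one way to do it; the hostname is whatever you picked for the domain, the file and secret names are placeholders, and the secret name the chart expects is chart-specific:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout gitlab.key -out gitlab.crt \
  -subj "/CN=gitlab.example.local"
kubectl create secret tls gitlab-self-signed --cert=gitlab.crt --key=gitlab.key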
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.
I want to access the k8s API resources. My cluster is a 1-node cluster. The kube-apiserver is listening on ports 8080 and 6443. curl localhost:8080/api/v1 works from inside the node; if I hit :8080, it does not work because some other service (eureka) is running on this port. This leaves me the option of accessing :6443. In order to make the API accessible, there are 2 ways:
1- Create a Service for kube-api with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
2- Make a change in weave (weave is available as a service in the k8s setup) so that my server can access the k8s APIs.
Either option is fine with me. Any help will be appreciated.
My cluster is a 1-node cluster
One of those words does not mean what you think it does. If you haven't already encountered it, you will eventually discover that the memory and CPU pressure of attempting to run all the components of a kubernetes cluster on a single Node will cause memory exhaustion, and then lots of things won't work right with some pretty horrible error messages.
I can deeply appreciate wanting to start simple, but you will be much happier with a 3 machine cluster than trying to squeeze everything into a single machine. Not to mention the fact that only having a single machine won't surface any networking misconfigurations, which can be a separate frustration when you think everything is working correctly and only then go to scale your cluster up to more Nodes.
some other service (eureka) is running on this port.
Well, at the very real risk of stating the obvious: why not move one of those two services to listen on a different port? Many cluster provisioning tools (I love kubespray) have a configuration option that lets you very easily adjust the insecure port used by the apiserver to a port of your choosing. It can even be a privileged port (that is: less than 1024), because docker runs as root and thus can --publish a port using any number it likes.
If having :8080 is so important to both pieces of software that it would be prohibitively costly to relocate the port, then consider binding the "eureka" software to the machine's IP and binding the kubernetes apiserver's insecure port to 127.0.0.1 (which is certainly the intent, anyway). If "eureka" is also running in docker, you can change its --publish to include an IP address on the "left hand side" to very cheaply do what I said: --publish ${the_ip}:8080:8080 (or whatever). If it is not using docker, there is still a pretty good chance the software will accept a "bind address" or "bind host" option through which you can enter that IP address instead of "0.0.0.0".
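Concretely, assuming the machine's IP is 10.0.0.5 and the image name eureka-image (both placeholders), the change would look like:

# before: eureka grabs 0.0.0.0:8080 on the host
docker run -d --publish 8080:8080 eureka-image
# after: eureka only listens on the machine's IP, leaving 127.0.0.1:8080 free for the apiserver
docker run -d --publish 10.0.0.5:8080:8080 eureka-image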
1- Create a Service for kube-api with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
Every Pod running in your cluster has the option of declaring a serviceAccountName, which by default is default, and the effect of having a serviceAccountName is that every container in the Pod has access to those components you mentioned: the CA certificate and a JWT credential that enables the Pod to invoke the kubernetes API (which from within the cluster one can always reach via the kubernetes Service IP, the environment variable $KUBERNETES_SERVICE_HOST, or the hostname https://kubernetes -- assuming you are using kube-dns). Those serviceAccount credentials are automatically projected into the container at /var/run/secrets/kubernetes.io/serviceaccount without requiring that your Pod declare those volumeMounts explicitly.
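To make that concrete, from inside any container with the default serviceAccount projected, something like the following sketch should work, subject to whatever RBAC rules your cluster applies to that serviceAccount:

SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA/ca.crt \
     --header "Authorization: Bearer $(cat $SA/token)" \
     https://kubernetes/api/v1/namespaces/$(cat $SA/namespace)/pods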
So, if your concern is that one must have credentials from within the cluster, that concern can go away pretty quickly. If your concern is access from outside the cluster, there are a lot of ways to address that concern which don't directly involve creating all 3 parts of that equation.
I am using Kubernetes Engine on the Google Cloud Platform. I have a pod running a process in a Docker scratch container. I also have a load balancer service that gives me access to the pod from the outside world.
The process running in the pod needs to know what its external IP address is. How can I get this?
Prior to using Kubernetes Engine I was using Compute Engine and could find the external IP address by the following:
curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
Are there any internal tools I can use that would be available to my process? Or would I need the process to call an external site that can mirror back the IP address?
Every Pod (unless configured not to do so) has valid kubernetes credentials in /var/run/secrets/kubernetes.io/serviceaccount/token as described here, so the answer is to use the kubernetes API to ask the Service in front of the Pod(s) for its status.loadBalancer.ingress[0].ip as described here, which I have every reason to believe GKE will keep up-to-date with any changes to the load balancer. The kubernetes API is always(?) located at https://kubernetes (that's normally enough, or https://kubernetes.default.svc.cluster.local is its full name), so there should be very little configuration the Pod would need in order to carry out the lookup.
The asterisk to that response is that one must give the Pod(s) the name of the Service sitting in front of them, because (for the most part) there is no way for a Pod to know how many Services point to it.
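A sketch of that lookup from inside the Pod, assuming the Service is named my-service in the same namespace (both assumptions) and that jq is available in the image:

SA=/var/run/secrets/kubernetes.io/serviceaccount
NS=$(cat $SA/namespace)
curl --cacert $SA/ca.crt \
     --header "Authorization: Bearer $(cat $SA/token)" \
     https://kubernetes/api/v1/namespaces/$NS/services/my-service \
  | jq -r '.status.loadBalancer.ingress[0].ip'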
I have a local Kubernetes cluster on a single machine, and a container that needs to receive a URL (like https://www.wikipedia.org/) and extract the text content from it. Essentially, I need my pod to connect to the outside world. Since I am using v1.2.5, I need some DNS add-on like SkyDNS, but I cannot find any working example or tutorial on how to set it up. Tutorials like this one usually only tell me how to make pods within the cluster talk to each other via DNS look-up.
Therefore, could anyone give me some advice on how to set up and configure an add-on of Kubernetes so that pods can access the public Internet? Thank you very much!
You can simply create your pods with dnsPolicy: Default; this will give them a resolv.conf just like the host's, and they will be able to resolve wikipedia.org. They will not be able to resolve cluster-local services. If you're looking to actually deploy kube-dns so you can also resolve cluster-local services, this is probably the best starting point: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
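A minimal Pod spec illustrating the first option; the pod name, image, and command are just placeholders to demonstrate that public names resolve:

apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: Default        # inherit the node's resolv.conf instead of the cluster DNS
  containers:
  - name: test
    image: busybox
    command: ["nslookup", "www.wikipedia.org"]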