I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck at this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when gitlab generates links that are designed to point back to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if it's important to you.
For the externalIP, that will be whatever minikube ip emits, and it is the IP through which you will communicate with gitlab (since you will not be able to use the Pods' IP addresses from outside minikube). If gitlab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
The certmanager-issuer.email you can just forget about, because it 100% will not issue you a Let's Encrypt cert running on minikube unless they have fixed cert-manager to use the dns01 challenge. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your gitlab instance, then issue the instance a self-signed cert and call it a draw.
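For concreteness, a hedged sketch of what those three values might look like in a values file passed to helm; the domain and email are placeholders, and the externalIP should be whatever minikube ip prints on your machine:

```yaml
# values-minikube.yaml -- a rough sketch, not a complete values file
global:
  hosts:
    domain: gitlab.example.local   # any name you control locally (see the /etc/hosts note above)
    externalIP: 192.168.99.100     # replace with the output of `minikube ip`
certmanager-issuer:
  email: you@example.com           # effectively unused on minikube, as noted above
```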
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire gitlab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on kubernetes, and will almost certainly have all the things you would need to make that stuff work.
I have very little knowledge of how kubernetes works and I'm trying to learn. I have some difficulty understanding how I can use kubernetes to deploy my DB (CouchDB), the reverse proxy (nginx) and the SSL certificate (letsencrypt with certbot-auto).
I run CentOS 8 and have installed podman for the containers. I can install each one in different containers within the same pod and I can make them communicate properly.
What I don’t understand is how I can use kubernetes to properly deploy all of these containers and scale them in a cluster.
My questions are the following:
Where should I start to make kubernetes work with these three components? Should I install the three containers first with their configuration? (The DB can be configured to handle clusters, but my understanding is that kubernetes handles clustering, so I’m wondering whether I have to configure the DB for the cluster and hence install two nodes.)
Should I install letsencrypt with certbot? I don’t understand how kubernetes can deploy new pods and have letsencrypt automatically configured for them.
If anyone can give me the steps to get this done it would be really great...I just don’t really know where to start and the docs and tutorials are a bit confusing.
I think you need to deploy two applications, one for your DB and one for Nginx, but for your certificates there are a few different ways to get letsencrypt working on kubernetes.
For letsencrypt and nginx, these two articles could give you some insight into what you need to do:
Nginx & LetsEncrypt, and this one: Let’s Encrypt on Kubernetes
For CouchDB, this article may help you: CouchDB on Kubernetes. It mentions NFS as the storage backend, but you can use your own.
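One common method (and likely what such articles walk through) is cert-manager; a Let's Encrypt ClusterIssuer looks roughly like the sketch below, where the email address and ingress class are placeholders you would replace:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com              # placeholder, use your real address
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret where the ACME account key is stored
    solvers:
    - http01:
        ingress:
          class: nginx                  # assumes the nginx ingress controller
```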
The goal is to enable the Kubernetes API server to connect to resources on the internet when it is on a private network from which internet resources can only be accessed through a proxy.
Background:
A kubernetes cluster is spun up using kubespray, containing two apiserver instances that run on two VMs and are controlled via a manifest file. Azure AD is being used as the identity provider for authentication. In order for this to work, the API server needs to initialize its OIDC component by connecting to Microsoft and downloading some keys that are used to verify tokens issued by Azure AD.
Since the Kubernetes cluster is on a private network and needs to go through a proxy before reaching the internet, one approach was to set https_proxy and no_proxy in the kubernetes API server container environment by adding them to the manifest file. The problem with this approach is that when using Istio to manage access to APIs, no_proxy needs to be updated whenever a new service is added to the cluster. One solution could have been to add a suffix to every service name and set *.suffix in no_proxy. However, it appears that using wildcards in the no_proxy configuration is not supported.
Is there any alternate way for the Kubernetes API server to reach Microsoft without interfering with other functionality?
Please let me know if any additional information or clarifications are needed.
I'm not sure how you would have Istio manage the egress traffic for your Kubernetes masters where your kube-apiservers run, so I wouldn't recommend it. As far as I understand, Istio is generally used to manage (ingress/egress/lb/metrics/etc.) the actual workloads in your cluster, and those workloads generally run on your nodes, not your masters. I mean, the kube-apiserver actually manages the CRDs that Istio uses.
Most people run Docker on their masters, so you can use the proxy environment variables for your containers like you mentioned.
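For reference, a hedged fragment of what that looks like in the kube-apiserver static Pod manifest; the file path, proxy address and CIDRs below are examples, and your kubespray defaults may differ:

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (example only)
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: HTTPS_PROXY
      value: "http://proxy.corp.example:3128"   # your internet-connected proxy
    - name: NO_PROXY
      value: "127.0.0.1,10.233.0.0/18,10.233.64.0/18,.svc,.cluster.local"
```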
We tried a couple of solutions to avoid having to set http(s)_proxy and no_proxy env variables in the kube-apiserver and constantly whitelist new services in the cluster...
1. Introduce a self-managed proxy server which determines which traffic is forwarded to the internet-connected proxy and which traffic is not proxied:
squid seemed to do the trick by defining some ACLs (a rough sketch follows below). One issue we had was that node names were not resolved by kube-dns, so we had to add manual entries to the containers' hosts files (not sure how these were handled by default).
We also tried writing a proxy in node, but it had trouble with https in some scenarios.
2. Introduce a self-managed identity provider between Azure and our k8s cluster, configured to use the internet-connected proxy, thus avoiding having to configure the proxy in the kube-apiserver.
We ended up going with option 2 as it gave us more flexibility in the long term.
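For reference, the squid ACL approach from option 1 looked roughly like the sketch below, wrapped in a ConfigMap; the domains and upstream proxy address are placeholders, not our actual values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-squid-conf
data:
  squid.conf: |
    http_port 3128
    # cluster-internal destinations go direct, never through the corporate proxy
    acl cluster_dst dstdomain .svc .cluster.local
    always_direct allow cluster_dst
    # everything else is handed to the internet-connected parent proxy
    cache_peer proxy.corp.example parent 3128 0 no-query default
    never_direct allow all
    http_access allow all
```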
I want to access k8s API resources. My cluster is a 1-node cluster. The kube-apiserver is listening on ports 8080 and 6443. curl localhost:8080/api/v1 works from inside the node. If I hit :8080 it's not working because some other service (eureka) is running on this port. This leaves me the option of accessing :6443. To make the API accessible, there are 2 ways:
1. Create a Service for the kube-apiserver on some specific port which targets 6443. For that, ca.crt, key, token etc. are required. How do I create and configure those so that I can access the API?
2. Make a change in weave (weave is available as a service in the k8s setup) so that my server can access the k8s APIs.
Either option is fine with me. Any help will be appreciated.
My cluster is a 1-node cluster
One of those words does not mean what you think it does. If you haven't already encountered it, you will eventually discover that the memory and CPU pressure of attempting to run all the components of a kubernetes cluster on a single Node will cause memory exhaustion, and then lots of things won't work right with some pretty horrible error messages.
I can deeply appreciate wanting to start simple, but you will be much happier with a 3 machine cluster than trying to squeeze everything into a single machine. Not to mention the fact that only having a single machine won't surface any networking misconfigurations, which can be a separate frustration when you think everything is working correctly and only then go to scale your cluster up to more Nodes.
some other service (eureka) is running on this port.
Well, at the very real risk of stating the obvious: why not move one of those two services to listen on a separate port from one another? Many cluster provisioning tools (I love kubespray) have a configuration option that allows one to very easily adjust the insecure port used by the apiserver to be a port of your choosing. It can even be a privileged port (that is: less than 1024) because docker runs as root and thus can --publish a port using any number it likes.
If having :8080 is so important to both pieces of software that it would be prohibitively costly to relocate the port, then consider binding the "eureka" software to the machine's IP and binding the kubernetes apiserver's insecure port to 127.0.0.1 (which is certainly the intent, anyway). If "eureka" is also running in docker, you can change its --publish to include an IP address on the "left hand side" to very cheaply do what I said: --publish ${the_ip}:8080:8080 (or whatever). If it is not using docker, there is still a pretty good chance that the software will accept a "bind address" or "bind host" option through which you can enter that IP address, rather than "0.0.0.0".
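As a sketch of both of those knobs (the kubespray variable name is from memory and the apiserver flags only exist on older releases, so treat these as assumptions to verify against your versions):

```yaml
# kubespray group_vars (e.g. k8s-cluster.yml): move the apiserver's insecure port off 8080
kube_apiserver_insecure_port: 8081

# Alternatively, in the kube-apiserver static Pod manifest, keep 8080 but bind it to
# loopback so eureka can own 8080 on the machine's IP:
#   - --insecure-bind-address=127.0.0.1
#   - --insecure-port=8080
```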
1. Create a Service for the kube-apiserver on some specific port which targets 6443. For that, ca.crt, key, token etc. are required. How do I create and configure those so that I can access the API?
Every Pod running in your cluster has the option of declaring a serviceAccountName, which by default is default, and the effect of having a serviceAccountName is that every container in the Pod has access to those components you mentioned: the CA certificate and a JWT credential that enables the Pod to invoke the kubernetes API (which from within the cluster one can always access via the kubernetes Service IP, the environment variable $KUBERNETES_SERVICE_HOST, or the hostname https://kubernetes -- assuming you are using kube-dns). Those serviceAccount credentials are automatically projected into the container at /var/run/secrets/kubernetes.io/serviceaccount without requiring that your Pod declare those volumeMounts explicitly.
So, if your concern is that one must have credentials from within the cluster, that concern can go away pretty quickly. If your concern is access from outside the cluster, there are a lot of ways to address that concern which don't directly involve creating all 3 parts of that equation.
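To make that concrete, a minimal sketch of a Pod that exercises those projected credentials from inside the cluster; the Pod name and image are arbitrary examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-probe
spec:
  serviceAccountName: default      # the implicit default, shown here for clarity
  restartPolicy: Never
  containers:
  - name: probe
    image: curlimages/curl         # any image with curl and a shell will do
    command:
    - sh
    - -c
    - >
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) &&
      curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -H "Authorization: Bearer $TOKEN"
      https://kubernetes.default.svc/api/v1
```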
Right now I'm accessing my pods (postgres, port 5432) through a service that is exposed, but since gcloud charges for every forwarding rule created, the number of pods I need to monitor or execute stuff in is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of VPN? A PuTTY tunnel or something? Any help would be appreciated!
I'm also using kubectl exec.
If you are looking for a managed solution then Google is offering VPN for that:
https://console.cloud.google.com/networking/vpn/
If you are happy to roll your own, then you can create a new Compute instance on the same network where your nodes are and set up openvpn there. This will give you a fixed IP as a freebie.
A more advanced solution is to run openvpn as a pod (or pods) and expose it with a Service of type NodePort. (Optionally, manually create a single load balancer on google cloud to get a static IP for it.)
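A hedged sketch of that NodePort Service; the selector, ports and nodePort are assumptions to adapt to your openvpn deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openvpn
spec:
  type: NodePort
  selector:
    app: openvpn        # must match the labels on your openvpn pod(s)
  ports:
  - name: vpn
    protocol: UDP
    port: 1194
    targetPort: 1194
    nodePort: 31194     # must fall inside the cluster's NodePort range
```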
At the end of the day the ideal solution depends much on your environment and goal.
I have a local Kubernetes cluster on a single machine, and a container that needs to receive a url (like https://www.wikipedia.org/) and extract the text content from it. Essentially I need my pod to connect to the outside world. Since I am using v1.2.5, I need some DNS add-on like SkyDNS, but I cannot find any working example or tutorial on how to set it up. Tutorials like this usually only tell me how to make pods within the cluster talk to each other by DNS look-up.
Therefore, could anyone give me some advice on how to set up and configure an add-on of Kubernetes so that pods can access the public Internet? Thank you very much!
You can simply create your pods with "dnsPolicy: Default"; this will give them a resolv.conf just like the host's, and they will be able to resolve wikipedia.org. They will not be able to resolve cluster-local services. If you're looking to actually deploy kube-dns so you can also resolve cluster-local services, this is probably the best starting point: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
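A minimal sketch of such a pod; the image and command are just an example to verify that public name resolution works:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  dnsPolicy: Default          # inherit the node's resolv.conf
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["nslookup", "www.wikipedia.org"]
```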