Connect pod to local process with minikube - kubernetes

I am new to Kubernetes and trying to throw together a quick learning project; however, I am confused about how to connect a pod to a local service.
I am storing some config in a ZooKeeper instance running on my host machine, and am trying to connect a pod to it to grab that config.
However, I cannot get it to work. I've tried the "magic" 10.0.2.2 address that I've read about, but that did not work. I also tried creating a Service and Endpoints object, but again to no avail. Any help would be appreciated, thanks!
Also, for background I'm using minikube on macOS with the hyperkit vm-driver.
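
In case it helps, the Service and Endpoints I tried looked roughly like this; the name zookeeper and the 192.168.64.1 address (my guess at the hyperkit gateway/host IP) are placeholders, not something I'm sure of:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - protocol: TCP
      port: 2181
      targetPort: 2181
---
apiVersion: v1
kind: Endpoints
metadata:
  name: zookeeper          # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.64.1   # guess at where the host is reachable from inside the minikube VM
    ports:
      - port: 2181

The idea is a Service with no selector, backed by a manually created Endpoints object pointing at the host, so pods can connect to zookeeper:2181.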

Related

Node name mismatch when trying to access publicly exposed RabbitMQ through Kubernetes

I'm trying to access a publicly exposed RabbitMQ instance running in an EKS Kubernetes cluster.
The LoadBalancer Service was created and works properly (I'm able to connect to the exposed port using telnet).
The problem is that when I try to connect to RabbitMQ using a CLI tool I get:
Node name (or hostname) mismatch: node "rabbit@shared-service-rabbitmq-0.shared-service-rabbitmq-headless.shared.svc.cluster.local" believes its node name is not "rabbit@some-domain.com" but something else
I didn't find a way to change the hostname or the node name using the Helm chart.
I'm pretty sure the solution is something really simple, but I've spent many hours researching and I still don't see how to fix this.
Could someone please point me to the right direction?
Thanks in advance

Deploying GitLab on Minikube

I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
The tutorial is very poor in explanations. I'm stuck at this step. Can someone tell me what these fields are and what I should put in them?
I'm trying to deploy GitLab on Kubernetes using minikube through this tutorial, but I don't know what values to put in the fields global.hosts.domain, global.hosts.externalIP and certmanager-issuer.email.
For the domain, you can likely use whatever you'd like; just be aware that when GitLab generates links designed to point back to itself, they won't resolve. You can work around that with something like dnsmasq or by editing /etc/hosts, if that's important to you.
For the externalIP, that will be whatever minikube ip emits, and it is the IP through which you will communicate with GitLab (since you will not be able to use the Pods' IP addresses outside of minikube). If GitLab does not use a Service of type NodePort, you're in for some more hoop-jumping to expose those ports via minikube's IP.
The certmanager-issuer.email you can just forget about, because it 100% will not issue you a Let's Encrypt cert running on minikube unless they have fixed cert-manager to use the dns01 challenge. In order for Let's Encrypt to issue you a cert, they have to connect to the webserver for which they are issuing the cert, and (as you might guess) they will not be able to connect to your minikube IP. If you want to experience SSL on your GitLab instance, issue the instance a self-signed cert and call it a draw.
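
If it helps to see them in one place, here is a rough sketch of a values file with those three fields filled in; the domain, IP and email are placeholders (use whatever minikube ip prints on your machine), and this is not a complete GitLab configuration:

# values.yaml (sketch only)
global:
  hosts:
    domain: gitlab.example.local   # anything you like; see the /etc/hosts note above
    externalIP: 192.168.99.100     # whatever `minikube ip` prints
certmanager-issuer:
  email: you@example.com           # required by the chart, but Let's Encrypt will not succeed on minikube
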
The tutorial is very poor in explanations.
That's because what you are trying to do is perilous; minikube is not designed to run an entire GitLab instance, for the above and tens of other reasons. Google Cloud Platform offers generous credits to kick the tires on Kubernetes, and will almost certainly have all the things you would need to make that stuff work.

Connect a Flask pod with a MongoDB pod in Kubernetes

I want to connect a Flask pod with MongoDB in Kubernetes. I have deployed both, but I have no clue how to connect them and do CRUD operations against the database. Any example would help.
Maybe you could approach this in steps. For example, you could start by running a demo Flask app in Kubernetes, such as https://github.com/honestbee/flask_app_k8s, and then look at adding in the database. First you could do this locally, as in "How can I use MongoDB with Flask?". Then, to make it work in Kubernetes, I'd suggest installing the MongoDB Helm chart (following its instructions at https://github.com/helm/charts/tree/master/stable/mongodb) and running kubectl get service to find out what service name and port the deployed Mongo is using. Then you can put that service name and port into your app's configuration, and the connection should work as it would locally thanks to Kubernetes DNS-based service discovery (which I see you also have a question about, but you don't necessarily need to know all the theory to try it out).
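
To make that last step concrete, the usual wiring is to hand the Mongo Service's DNS name to the app as configuration, e.g. as an environment variable in the Flask Deployment. A minimal sketch, assuming the Helm release created a Service named my-mongodb on port 27017 and that your app reads MONGO_URI (both are assumptions; use whatever kubectl get service shows and whatever your code expects):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: your-flask-image:latest               # placeholder image
          env:
            - name: MONGO_URI                          # assumes the app reads this variable
              value: mongodb://my-mongodb:27017/mydb   # service name/port from `kubectl get service`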

Local host redirection with Cloud Shell and Kubernetes

I'm a complete newbie with Kubernetes, and have been trying to get a secure CockroachDB cluster running. I'm using the instructions and preconfigured .yaml files provided by Cockroach Labs: https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html
I'm using the Cloud Shell in my Google Cloud console to set everything up. Everything goes well, and I can do local SQL tests and load data. Monitoring the cluster by proxying to localhost with the command below starts off serving as expected:
kubectl port-forward cockroachdb-0 8080
However, when using the Cloud Shell web preview on port 8080 to attach to localhost, the browser session returns "too many redirects".
My next challenge will be to figure out how to expose the cluster on a public address, but for now I'm stuck on what seems to be a fairly basic problem. Any advice would be greatly appreciated.
Just to make sure this question has an answer: the problem was that the question asker was running port-forward from the Google Cloud Shell rather than from his local machine. This meant the service was not accessible to his local machine's web browser (because the Cloud Shell runs on a VM in Google's datacenters).
The ideal solution is to run the kubectl port-forward command from his own computer.
Or, barring that, to expose the cockroachdb pod externally using kubectl expose pod cockroachdb-0 --port=8080 --type=LoadBalancer, as suggested in the comments.
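
For reference, a rough declarative equivalent of that expose command would be a LoadBalancer Service selecting just that pod; the statefulset.kubernetes.io/pod-name label is an assumption about how the StatefulSet labels its pods, so check kubectl get pod cockroachdb-0 --show-labels before relying on it:

apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-0-public
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    statefulset.kubernetes.io/pod-name: cockroachdb-0   # assumed pod label; verify with --show-labels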

Kubernetes: Connect to the outside world from pod

I have a local Kubernetes cluster on a single machine, and a container that needs to receive a URL (like https://www.wikipedia.org/) and extract the text content from it. Essentially I need my pod to connect to the outside world. Since I am using v1.2.5, I need some DNS add-on like SkyDNS, but I cannot find any working example or tutorial on how to set it up. Tutorials like this usually only tell me how to make pods within the cluster talk to each other by DNS lookup.
Therefore, could anyone give me some advice on how to set up and configure a Kubernetes add-on so that pods can access the public Internet? Thank you very much!
You can simply create your pods with dnsPolicy: Default. This will give them a resolv.conf just like the host's, and they will be able to resolve wikipedia.org. They will not be able to resolve cluster-local services. If you're looking to actually deploy kube-dns so you can also resolve cluster-local services, this is probably the best starting point: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
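
For example, a minimal pod spec with that policy set (the pod name and image are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: text-extractor            # placeholder name
spec:
  dnsPolicy: Default              # use the node's resolv.conf, so public names like wikipedia.org resolve
  containers:
    - name: app
      image: busybox              # placeholder image
      command: ["sleep", "3600"]

With dnsPolicy: Default the pod inherits the node's DNS configuration instead of the cluster DNS, which is why cluster-local service names stop resolving.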